Coronavirus & A Little History

Dozens more people have died in the city at the centre of China’s coronavirus outbreak, where hospitals are severely undersupplied and understaffed and residents have described increasingly desperate conditions.
Chinese state media reported 57 new deaths on Monday, all but one in Wuhan, the capital of the central province of Hubei which has been under lockdown for almost two weeks as authorities try to contain the outbreak.
The foreign ministry issued an urgent appeal for protective medical equipment as the death toll reached 361, surpassing the number of deaths in mainland China from the 2002-03 Sars outbreak. The number of infections also jumped, passing 17,200.
“What China urgently needs at present are medical masks, protective suits and safety goggles,” the foreign ministry spokeswoman Hua Chunying told a press briefing.
Authorities in provinces that are home to more than 300 million people – including Guangdong, the country’s most populous – have ordered everyone to wear masks in public in an effort to contain the virus. But factories capable of producing about 20 million masks a day are only operating at between 60 and 70% of capacity, according to the ministry of industry.
Hua also criticised the US, where a ban on people travelling from China went into effect on Sunday. The US and other countries had “overreacted” to the outbreak and Washington had not given China any substantive help, instead only creating and spreading panic, Hua said.
Pakistan, one of China’s allies, said on Monday it was resuming air travel after a three-day suspension.
World Health Organization director-general Tedros Adhanom Ghebreyesus said travel bans were unnecessary.
“There is no reason for measures that unnecessarily interfere with international travel and trade,” he said. “We call on all countries to implement decisions that are evidence-based and consistent.”
A video apparently filmed in No 5 Wuhan hospital went viral, showing body bags in a bus, and a man weeping next to his dead father. In the video, the person filming says: “So many people just died. There are so many dead bodies … They are still moving bodies.”
On Monday, Chinese leader Xi Jinping held a meeting for top officials on the issue and called the outbreak a “major test” of China’s governance system and capabilities. Xi said that officials who failed to perform their duties “would be punished”. The meeting called for the country to “confront” weaknesses exposed by the epidemic and improve its emergency response capabilities and public health system.
The state news agency Xinhua said 68 medical teams of 8,300 staff from across China had been sent to Hubei. One of two new emergency hospitals built in the last 10 days to house patients infected by the virus was due to open on Monday. State media said 1,400 military medics would be sent to run the facility.
The virus has now spread to more than 24 countries. Several, including allies of Beijing, have begun to close their borders to Chinese nationals and travelers from the country.
So, what is the coronavirus and should we be worried?
It is a member of the coronavirus family that has never been encountered before. Like other coronaviruses, it has come from animals; many of the earliest cases were linked to a seafood and live-animal market in Wuhan. New and troubling viruses usually originate in animal hosts; Ebola and flu are examples.
Severe acute respiratory syndrome (Sars) and Middle Eastern respiratory syndrome (Mers) are both caused by coronaviruses that came from animals.
The virus causes pneumonia. Those who have fallen ill are reported to suffer coughs, fever and breathing difficulties. In severe cases there can be organ failure. As this is viral pneumonia, antibiotics are of no use. The antiviral drugs we have against flu will not work.
If people are admitted to a hospital, they may get support for their lungs and other organs as well as fluids. Recovery will depend on the strength of their immune system. Many of those who have died are known to have been already in poor health.
Human-to-human transmission has been confirmed by China’s national health commission. As of February 3rd, 361 people had died in China and one in the Philippines. Confirmed infections in China stood at 17,238 (the official Chinese figures include Taiwan, Hong Kong and Macau), while infections outside China stood at more than 150.
Two members of one family have been confirmed to have the virus in the UK, after more than 160 were tested and found negative. The actual number to have contracted the virus could be far higher as people with mild symptoms may not have been detected.
So, should we panic?
No. The spread of the virus outside China is worrying but not an unexpected development. The key concerns are how transmissible this new coronavirus is between people and what proportion become severely ill and end up in the hospital.
Human coronavirus was first discovered in 1965 and accounts for many cases of the common cold. The virus gets its name from its crown-like shape.
Coronaviruses are zoonotic, which means they’re transmitted between animals and people. SARS was transmitted from civet cats to humans, and researchers suspect MERS is transmitted from camels to humans.

Coronaviruses affect all age groups and most are not dangerous. They often cause only mild symptoms like a stuffy nose, cough and sore throat that can be treated with rest and over-the-counter medications. Most coronaviruses spread the same way other cold viruses do:
through the air by coughing and sneezing
close personal contact, such as touching or shaking hands with someone who’s sick
touching an object with the virus on it, then touching your mouth, nose or eyes
The U.S. Centers for Disease Control and Prevention (CDC) says people in the United States who get a coronavirus will usually get infected in the fall and winter, though it can happen any time of the year.
Most people will get infected with one or more of the common human coronaviruses during their lifetime.
Severe cases can lead to pneumonia, acute respiratory syndrome, kidney failure and even death.
The new coronavirus’s incubation period is still unknown. However, health officials at the WHO assume it is about 14 days; it is not yet known whether people are contagious during the incubation period.
Could the coronavirus trigger a pandemic here in the US? If so, what can be done?
To answer that question, let’s look to our history.
Throughout history, influenza viruses have mutated and caused pandemics or global epidemics. In 1918, an especially virulent influenza pandemic struck here in the United States, killing an estimated 675,000 Americans.
Illness from the 1918 flu pandemic, also known as the Spanish flu, came on quickly. Some people felt fine in the morning but died by nightfall. Many who caught the Spanish flu did not die from the virus itself but from bacterial complications, such as pneumonia.
During the 1918 pandemic:
• Approximately 20% to 40% of the worldwide population became ill
• An estimated 50 million people died
• Nearly 675,000 people died in the United States
Unlike earlier pandemics and seasonal flu outbreaks, the 1918 pandemic flu saw high mortality rates among healthy adults. In fact, the illness and mortality rates were highest among adults 20 to 50 years old. The reasons for this remain unknown.
So where did the 1918 influenza come from? And why was it so lethal?
In 1918, the Public Health Service had just begun to require state and local health departments to provide them with reports about diseases in their communities. The problem? Influenza wasn’t a reportable disease.
The disease was first observed in Haskell County, Kansas, in January 1918, prompting local doctor Loring Miner to send a warning to the U.S. Public Health Service’s academic journal.
On March 4th, 1918, company cook Albert Gitchell reported sick at Fort Riley, Kansas. By noon on March 11th, over 100 soldiers were in the hospital, and within days 522 men at the camp had reported sick. By then the virus had already reached Queens, New York.
By May, reports of severe influenza trickled in from Europe. Young soldiers, men in the prime of life, were becoming ill in large numbers. Most of these men recovered quickly but some developed a secondary pneumonia of “a most virulent and deadly type.”
Within two months, influenza had spread from the military to the civilian population in Europe. From there, the disease spread outward—to Asia, Africa, South America and, back again, to North America.
In Boston, dockworkers at Commonwealth Pier called in sick in massive numbers during the last week in August. Suffering from fevers as high as 105 degrees, these workers had severe muscle and joint pains. For most of these men, recovery quickly followed. But 5 to 10% of these patients developed severe and massive pneumonia. Death often followed.
Within days, the disease had spread outward to the city of Boston itself. By mid-September, the epidemic had spread even further with states as far away as California, North Dakota, Florida and Texas reporting severe epidemics.
In its wake, the pandemic would leave an estimated 50 million dead across the world. In America alone, about 675,000 people in a population of 105 million would die from the disease.
Entire families became ill. In Philadelphia, a city especially hard hit, so many children were orphaned that the Bureau of Child Hygiene found itself overwhelmed and unable to care for them.
As the disease spread, schools and businesses emptied. Telegraph and telephone services collapsed as operators took to their beds. Garbage went uncollected as garbage men reported sick. The mail piled up as postal carriers failed to come to work.
State and local departments of health also suffered from high absentee rates. No one was left to record the pandemic’s spread and the Public Health Service’s requests for information went unanswered.
As the bodies accumulated, funeral parlors ran out of caskets and bodies went uncollected in morgues.
In the absence of a sure cure, fighting influenza seemed an impossible task.

In many communities, quarantines were imposed to prevent the spread of the disease. Schools, theaters, saloons, pool halls and even churches were all closed. As the bodies mounted, even funerals were held outdoors to protect mourners against the spread of the disease.
Public officials, not yet aware that influenza was caused by a virus and that gauze masks offered little real protection against it, often demanded that people wear them. Some cities even passed laws requiring people to wear masks. Enforcing these laws proved to be very difficult, as many people resisted wearing masks.
Advertisements recommending drugs which could cure influenza filled newspapers. Some doctors suggested that drinking alcohol might prevent infection, causing a run on alcohol supplies.
States passed laws forbidding spitting, fearing that this common practice spread influenza.
None of these suggestions proved effective in limiting the spread of the pandemic.
Public health officials sought to stem the rising panic by censoring newspapers and issuing simple directives. Posters and cartoons were also printed, warning people of the dangers of influenza.
As I stated earlier, by the time the pandemic had ended, in the summer of 1919, nearly 675,000 Americans were dead from influenza. Hundreds of thousands more were orphaned and widowed.
You say it can’t happen here? Well folks, state officials first reported on the presence of influenza in Missouri on October 11, 1918. However, influenza had appeared in the state long before that date. By the third week of October, 3,765 influenza cases and 90 deaths had been reported from St. Louis, with 558 cases and 13 deaths being reported for October 16th alone.
On October 25th, state officials maintained that “conditions are either stationary or improving” in the state, but the situation soon took a turn for the worse as influenza began spreading into rural districts. Between October 26th and 28th, conditions remained dire, with rural and urban areas across the state reporting high numbers of cases and deaths.
On October 17th, The Kansas City Star announced that “A DRASTIC BAN IS ON.” All theaters, schools, and churches were closed. Public gatherings of twenty or more persons, including dances, parties, weddings, and funerals, were banned. Entertainment in hotels, bars, and restaurants was banned as well.
Only twenty-five people were to be allowed in a store at any one time. Streetcars were forbidden to carry more than twenty standing passengers. City officials also insisted that all elevators and streetcars be sterilized daily; telephone booths were to be sterilized twice a day. In an attempt to keep city streets clean, streets were flooded with water.
Officials were optimistic that these tactics would help contain the pandemic. But despite these efforts, Kansas City was struck especially hard by the pandemic, becoming one of the worst hit areas in the country. The situation was especially bad during the fall. Students at the American School of Osteopathy in Kirksville, Missouri graduated early so that they could join the fight against influenza.
In St. Louis, the mayor, Henry Keil, announced on October 7th that “Spanish influenza is now present” in the city. It will, he continued, “become epidemic.” Following this announcement, he ordered all theaters, schools, pool halls, cabarets, lodges, and dance halls to be closed and discontinued until further notice. Public funerals, Sunday schools, and conventions were also banned.
In late September, University of Missouri students were asked to refrain from leaving Columbia on visits, and the public was asked to avoid crowded areas. A local physician announced that “everyone with a cold should be regarded and should regard himself with suspicion.”
Between September 26th and December 6th, over a thousand students at the university contracted influenza. Looking back on the pandemic and its impact on the university, a local doctor said “I saw one patient die within 18 hours of this disease and 12 hours after being put to bed. I have seen a number of others menaced with death during the first 48 hours of the disease.” He concluded that “the statement that influenza is uncomplicated is, I believe, erroneous.”

On October 7, 1918, Mayor James Boggs prohibited Columbians from meeting in places of amusement, schools, and churches. The city and the university were quarantined. Only members of the Students’ Army Training Corps were allowed access to the campus for military training purposes. Influenza was widespread among the students, and two new hospitals were opened to care for influenza patients; nurses came from St. Louis and Centralia to ease the load.
The disease peaked in the fall of 1918. It continued to be prevalent throughout the state during the winter and spring. It gradually disappeared during the summer.
So there you have it folks. What do you think?
Should we shut down our borders till we get a handle on this?
Are health officials overreacting, or should more steps be taken to protect US citizens?
Finally, the big question, “Are we being told the truth about the coronavirus?”

Executive Privilege

With all the discussion of the impeachment trial, one issue keeps coming up.
Executive privilege.
So what is it? Is it something new under the Trump Administration?
Are the Democrats correct in saying the President can’t use it?
Who is telling the truth?
In the United States government, executive privilege is the power claimed by the President of the United States and other members of the executive branch to resist certain subpoenas and other interventions by the legislative and judicial branches of government to access information and personnel relating to the executive branch.
The concept of executive privilege is not mentioned explicitly in the United States Constitution, but the Supreme Court of the United States ruled it to be an element of the separation of powers doctrine, and derived from the supremacy of the executive branch in its own area of Constitutional activity.
The Supreme Court confirmed the legitimacy of this doctrine in United States v. Nixon, but only to the extent of confirming that there is a qualified privilege.
Once invoked, a presumption of privilege is established, requiring the Prosecutor to make a “sufficient showing” that the “Presidential material” is “essential to the justice of the case” (418 U.S. at 713-14). Chief Justice Burger further stated that executive privilege would most effectively apply when the oversight of the executive would impair that branch’s national security concerns.
Dwight Eisenhower, not President Trump, was the first president to use the phrase “executive privilege” after refusing to allow his advisers to testify at a Senate hearing in May 1954. Eisenhower believed that what is said in the White House should stay in the White House.
“Any man who testifies as to the advice he gave me won’t be working for me that night,” the president said.
Executive privilege is “the right of the president and high-level executive branch officers to withhold information from Congress, the courts and ultimately the public,” according to Mark Rozell, dean of the Schar School of Policy and Government at George Mason University. Executive privilege, Rozell wrote, can be used to protect national security and “the privacy of White House deliberations.”
The U.S. Constitution makes no mention of the concept of executive privilege. However, presidents from George Washington to Trump have resisted demands to share sensitive information with Congress. Some have succeeded. But over the past few decades, presidents have lost key court battles to withhold information.
It all began in 1792, when Washington declared that he didn’t have to provide internal documents demanded for a congressional investigation into a disastrous military loss by Maj. Gen. Arthur St. Clair to Native Americans.
Treasury Secretary Alexander Hamilton warned that in the future, Congress “might demand secrets of a very mischievous nature.” But Washington eventually turned over papers that “the public good would permit.”
In 1807, President Thomas Jefferson claimed he was exempt from a subpoena for him to testify at the trial of his former vice president, Aaron Burr, who was charged with treason.
“Constantly trudging” to the trial in Richmond, he said, would prevent him from fulfilling his presidential duties. Chief Justice John Marshall, who was presiding over the trial, ruled that the president wasn’t exempt. Jefferson didn’t testify, but he did “voluntarily” provide documents sought by Burr, who was acquitted.
Presidential power expanded in 1833, when President Andrew Jackson refused a demand by the U.S. Senate to turn over a list of advisers whom he consulted before moving money from the national bank to state banks. “I have yet to learn under what constitutional authority that a branch of the Legislature has a right to require of me an account of any communications,” Jackson responded. The Senate voted to censure Jackson, but it still didn’t get Old Hickory’s documents.
President Grover Cleveland “almost single-handedly restored and strengthened the power” of the presidency by his frequent use of executive privilege, according to Henry Graff, professor emeritus of history at Columbia University. After taking office in 1885, Cleveland declined to hand over documents to Congress “in the fight over presidential appointments,” Graff wrote.
In 1909, President Theodore Roosevelt refused to give the Senate his administration’s papers about an anti-trust prosecution of U.S. Steel Corp. To make sure senators didn’t get the documents, Roosevelt had them moved to the White House.
“The only way the Senate or the committee can get those papers now is through my impeachment,” he declared.
The clash over presidential confidentiality grew more intense when Eisenhower ordered his defense secretary to not allow Army officials to testify at hearings led by Sen. Joseph McCarthy, R-Wis., into alleged communists in the Army.
On May 17, 1954, Eisenhower wrote a letter that cited the need for advisers in the executive branch to be in the position “to be completely candid” in providing internal advice.
At a meeting before issuing the letter, Eisenhower said he had tried to stay out of the “damn business on the hill.” But “I will not allow people around me to be subpoenaed.” McCarthy criticized the action as an “iron curtain,” but it was the beginning of the end of his red-scare hearings.
Eisenhower’s letter didn’t use the words “executive privilege,” but it soon established the practice under that name.
The Eisenhower administration used executive privilege a record 44 times, raising concern that the president had too much power. It was Eisenhower’s vice president, Richard Nixon, who changed that perception once he became president.
In 1973 Nixon invoked executive privilege to try to stop a congressional subpoena seeking secret White House recordings that had been revealed during the Senate Watergate hearings.
Nixon argued that “the special nature of tape recordings of private conversations is such that these principles (of executive privilege) apply with even greater force to tapes of private Presidential conversations than to Presidential papers.”
On July 24, 1974, the U.S. Supreme Court unanimously ruled that Nixon had to turn over the tapes. The justices upheld the right of executive privilege but they ruled that this privilege couldn’t be used to withhold material sought for a criminal proceeding.
Chief Justice Warren Burger, whom Nixon had appointed, noted the precedent of the decision in the Burr trial that a president was “not above the law.”
“The decision establishes the legal duty of even a President to furnish evidence of what was said in conversations with his closest aides when relevant to the trial of a criminal cause,” wrote former Watergate Special Prosecutor Archibald Cox.
“Nixon went too far when he claimed executive privilege in an attempt to conceal evidence of White House wrongdoing,” said George Mason University’s Rozell. “His actions had the consequence of giving executive privilege a bad name.”
The Nixon precedent also turned out to be bad news for President Bill Clinton, who invoked executive privilege 14 times during the investigation by independent counsel Kenneth Starr.
In 1998, a federal judge ruled that Clinton couldn’t use the privilege to block questioning of his aides about his relationship with White House intern Monica Lewinsky.
Since then, President George W. Bush asserted executive privilege six times. President Barack Obama took the action once in 2012, when his Justice Department refused to turn over documents sought by the Republican-controlled House for the “Fast and Furious” program to track guns. A negotiated settlement was reached in court, after seven years, an indication of how long such disputes can take if litigated.
Trump’s assertion of executive privilege could wind up being resolved in the courts as well.
So, historically, presidents have exercised executive privilege in two types of cases: those that involve national security and those that involve executive branch communications.
The courts have ruled that presidents can also exercise executive privilege in cases involving ongoing investigations by law enforcement or during deliberations involving disclosure or discovery in civil litigation involving the federal government.
Just as Congress must prove it has the right to investigate, the executive branch must prove it has a valid reason to withhold information.
While there have been efforts in Congress to pass laws clearly defining executive privilege and setting guidelines for its use, no such legislation has ever passed and none is likely to do so in the future.
Presidents most often claim executive privilege to protect sensitive military or diplomatic information which, if disclosed, could place the security of the United States at risk. Given the president’s constitutional power as commander in chief of the U.S. military, this “state secrets” claim of executive privilege is rarely challenged.

Most conversations between presidents and their top aides and advisers are transcribed or electronically recorded. Presidents have contended that executive privilege secrecy should be extended to the records of some of those conversations.
The presidents argue that in order for their advisers to be open and candid in giving advice, and to present all possible ideas, they must feel safe that the discussions will remain confidential. This application of executive privilege, while rare, is always controversial and often challenged.
In the 1974 Supreme Court case of United States v. Nixon, the Court acknowledged “the valid need for protection of communications between high Government officials and those who advise and assist them in the performance of their manifold duties.”
The Court went on to state that “[h]uman experience teaches that those who expect public dissemination of their remarks may well temper candor with a concern for appearances and for their own interests to the detriment of the decision-making process.”
While the Court thus conceded the need for confidentiality in discussions between presidents and their advisers, it ruled that the right of presidents to keep those discussions secret under a claim of executive privilege was not absolute, and could be overturned by a judge.
In the Court’s majority opinion, Chief Justice Warren Burger wrote, “[n]either the doctrine of separation of powers, nor the need for confidentiality of high-level communications, without more, can sustain an absolute, unqualified Presidential privilege of immunity from judicial process under all circumstances.”
The ruling reaffirmed decisions from earlier Supreme Court cases, including Marbury v. Madison, establishing that the U.S. court system is the final decider of constitutional questions and that no person, not even the president of the United States, is above the law.
While Dwight D. Eisenhower was the first president to actually use the phrase “executive privilege,” every president since George Washington has exercised some form of the power.
As I stated earlier, in 1792, Congress demanded information from President Washington regarding a failed U.S. military expedition. Along with records about the operation, Congress called members of the White House staff to appear and deliver sworn testimony.
With the advice and consent of his Cabinet, Washington decided that, as the chief executive, he had the authority to withhold information from Congress. Although he eventually decided to cooperate with Congress, Washington built the foundation for future use of executive privilege.
Indeed, George Washington set the proper and now recognized standard for using executive privilege: Presidential secrecy must be exercised only when it serves the public interest. Bottom line, the real problem is the separation of powers in our government.
Josh Blackman, a constitutional-law professor at the South Texas College of Law in Houston and a member of the Cato Institute, recently wrote an article asking the question, “Is Trump actually restoring the separation of powers?”
Our Constitution carefully separates the legislative, executive, and judicial powers into three separate branches of government: Congress enacts laws, which the president enforces and the courts review.
However, when all of these powers are accumulated “in the same hands,” James Madison warned in Federalist No. 47, the government “may justly be pronounced the very definition of tyranny.”
The rise of the administrative state over the last century has pushed us closer and closer to the brink. Today, Congress enacts vague laws, the executive branch exercises unbounded discretion, and the courts defer to those dictates.
For decades, presidents of both parties have celebrated this ongoing distortion of our constitutional order because it promotes their agenda. The Trump administration, however, is now disrupting this status quo.
In a series of significant speeches at the Federalist Society’s national convention, the president’s lawyers have begun to articulate a framework for restoring the separation of powers:
First, Congress should cease delegating its legislative power to the executive branch; second, the executive branch will stop using informal “guidance documents” that deprive people of the due process of law without fair notice; and third, courts should stop rubber‐stamping diktats that lack the force of law.
Executive power is often described as a one‐way ratchet: Each president, Democrat or Republican, augments the authority his predecessor expanded.
These three planks of the Trump approach to separation of powers — delegation, due process, and deference — are remarkable, because they do the exact opposite by ratcheting down the president’s authority.
If Congress passes more precise statutes, the president has less discretion. If federal agencies comply with the cumbersome regulatory process, the president has less latitude.
If judges become more engaged and scrutinize federal regulations, the president receives less discretion.
Each of these actions would weaken the White House but strengthen the rule of law. To the extent that President Trump follows through with this platform, he can accomplish what few thought possible: The unending creep of the administrative monster can be slowed down, if not forced into retreat.
Don McGahn, who serves as White House counsel, recently lamented the fact that Congress gives the White House too much power. “Often Congress punts the difficulty of lawmaking to the executive branch,” he said, “then the judiciary concedes away the judicial power of the Constitution by deferring to agency’s interpretation of what Congress meant.”
One would think that a lawyer for the president would love this abdication by Congress and the courts. But no. Instead, McGahn praised a recent concurring opinion by Justice Thomas, in which Thomas “called for the non‐delegation doctrine to be meaningfully enforced” to prevent the “unconstitutional transfer of legislative authority to the administrative state.”
Again, reflect on the fact that if Justice Thomas’s position were adopted, much of Congress’s legislation — which carelessly lobs power to the White House with only the vaguest guidelines — would no longer pass constitutional muster.
The truth is, there is no need to rely on the Supreme Court to enforce the non‐delegation doctrine. The president has the power to veto half‐baked legislation. (Recall what Speaker Nancy Pelosi said of Obamacare: “We have to pass the bill so you can find out what is in it.”)
If Trump returned a bill to Congress, stating in his message that it failed to include sufficient guidelines, there would be a paradigm shift in Washington, D.C. Both Republicans and Democrats would have to go back to the drawing board and relearn how to legislate with more precision.
This process would strengthen the rule of law. Or Congress could simply override the veto and reaffirm that it has shirked its constitutional responsibility and could not care less about what this president, or any president for that matter, actually does.

The problems of the administrative state extend far beyond Congress’s delegations. “The Trump vision of regulatory reform,” McGahn said, “can be summed up in three simple principles: due process, fair notice, and individual liberty.”
Generally, when an administrative agency wants to affect a person’s liberty or property, it must go through a fairly complicated and cumbersome process that seeks public input.
However, in recent decades, administrations of both parties have sought to bypass this process through the use of so‐called “sub‐regulatory actions.”
By issuing memoranda, guidance documents, FAQs, and even blog posts, agencies have avoided the need to formalize their rules. Yet they still expect Americans to comply with these documents or face ruinous fines or even litigation.
In particular, during the Obama administration, the Department of Education used “Dear Colleague” letters to deprive students of due process on college campuses. McGahn called these missives “Orwellian.” And he’s right. In September, Betsy DeVos, the secretary of education, rightfully rescinded these guidance documents, announcing that “the era of rule by letter is over.”
More recently, in another speech at the Federalist Society meeting, former Attorney General Jeff Sessions announced that his agency would cease issuing guidance documents that effect a change in the law.
Under the leadership of Associate Attorney General Rachel Brand, who also spoke at the convention, the Justice Department will review existing guidance documents and propose modifying or even rescinding some. “This Department of Justice,” Brand said, “will not use guidance documents to circumvent the rulemaking process, and we will proactively work to rescind existing guidance documents that go too far.”
This is a remarkable position, as it retroactively and prospectively constrains the ability of the Justice Department to expand its own authority.
In Federalist No. 51, James Madison wrote of the “great difficulty” in framing a government: “you must first enable the government to control the governed; and in the next place oblige it to control itself.” I couldn’t agree more.

Fake News & Critical Thinking

Giant man-bats that spent their days collecting fruit and holding animated conversations; goat-like creatures with blue skin; a temple made of polished sapphire. These were the astonishing sights witnessed by John Herschel, an eminent British astronomer, when, in 1835, he pointed a powerful telescope “of vast dimensions” towards the Moon from an observatory in South Africa. Or that, at least, was what readers of the New York Sun were told in a series of newspaper reports.
This caused a sensation. People flocked to buy each day’s edition of the Sun.
The paper’s circulation shot up from 8,000 to over 19,000 copies, overtaking the Times of London to become the world’s bestselling daily newspaper.
There was just one small hitch. The fantastic reports had in fact been concocted by Richard Adams Locke, the Sun’s editor.
Herschel was conducting genuine astronomical observations in South Africa. But Locke knew it would take months for his deception to be revealed, because the only means of communication with the Cape was by letter.
The whole thing was a giant hoax – or, as we would say today, “fake news”. This classic example illuminates the pros and cons of fake news as a commercial strategy – and helps explain why it has re-emerged in the internet era.
That fake news sells had been known since the earliest days of printing. In the 16th and 17th centuries, printers would crank out pamphlets, or newsbooks, offering detailed accounts of monstrous beasts or unusual occurrences.
A newsbook published in Catalonia in 1654 reports the discovery of a monster with “goat’s legs, a human body, seven arms and seven heads”; an English pamphlet from 1611 tells of a Dutch woman who lived for 14 years without eating or drinking.
So what if they weren’t true? Printers argued, as internet giants do today, that they were merely providing a means of distribution, and were not responsible for ensuring accuracy.
But newspapers were different. They contained a bundle of different stories, not just one, and appeared regularly under a consistent title. They therefore had reputations to maintain.
The Sun, founded in 1833, was the first modern newspaper, funded primarily by advertisers rather than subscriptions, so it initially pursued readership at all costs.
At first it prospered from the Moon hoax, even collecting its reports in a bestselling pamphlet. But it was soon exposed by rival papers.
Editors also realized that an infinite supply of genuine human drama could be found by sending reporters to the courts and police stations to write true-crime stories – a far more sustainable model.
As the 19th century progressed, impartiality and objectivity were increasingly venerated at the most prestigious newspapers.
But in recent years search engines and social media have blown apart newspapers’ bundles of stories.
Facebook shows an endless stream of items from all over the web. Click an interesting headline and you may end up on a fake-news site, set up by a political propagandist or a teenager in Macedonia to attract traffic and generate advertising revenue.
Peddlers of fake stories have no reputation to maintain and no incentive to stay honest; they are only interested in the clicks. Hence the bogus stories.
Thanks to the internet, fake news is again a profitable business. This growth of fabricated stories corrodes trust in the media in general, and makes it easier for unscrupulous politicians to peddle half-truths.
Media organizations and technology companies are struggling to determine how best to respond.
Perhaps more overt fact-checking or improved media literacy will help. But what is clear is that a mechanism that held fake news in check for nearly two centuries – the bundle of stories from an organization with a reputation to protect – no longer works.
Although the tricks of persuasion may be as old as time, that doesn’t mean we shouldn’t worry.
Fake news is sometimes hard to recognize for what it is, constantly evolving to fit seamlessly into our lives. We now primarily rely on news we get from our friends, families, and colleagues (rather than the once widely respected gatekeepers of reliable information, the traditional press).
What is unprecedented is the speed at which massive misinformation, from deliberate propaganda and fake news to trolling to inadvertent misunderstanding, flows around the world like “digital wildfire,” thanks to social media.
Hunt Allcott and Matthew Gentzkow’s recent study “Social Media and Fake News in the 2016 Election” noted three things:
1. “62 percent of US adults get news on social media,”
2. “the most popular fake news stories were more widely shared on Facebook than the most popular mainstream news stories,” and
3. “many people who see fake news stories report that they believe them.”
In fact, the World Economic Forum in 2016 considered digital misinformation one of the biggest threats to global society. Researcher Vivian Roese furthermore points out that while traditional media has lost credibility with readers, for some reason internet sources of news have actually gained in credibility.
This may do lasting damage to public trust in the news, as well as to public understanding of important issues, such as when scientific or political information is repackaged and retold by the media, especially when coupled with our collectively deteriorating ability to interpret information critically and see propaganda for what it is.
Other research has also found that most readers spend most of their reading time scanning headlines rather than reading the story; in fact, “for the modern newspaper reader, reading the headline of a news item replaces the reading of the whole story.”
In today’s world, readers can have diverging interpretations of the same story, because they have not done their research and practiced critical thinking.
The difference lies in the editorial framing of a complex story for maximum eyeballs, particularly in the sneakiest signal of all: the humble headline.
This means that the headline, not the story, has become the single most important element of the news.
The headline is not merely a summary, picking out the most relevant aspect of the story, the way we tend to think of it.
Headlines are also actively designed to be attention-grabbing, persuading readers to read the story.
By telling its own micro story, quite apart from the news it accompanies and supports, a headline is supposed to tell you just what you need to know, but it quite often tells you things you don’t.
It’s a linguistic trap that we don’t often notice, that can be easily exploited, and that makes the problem of “fake news” even more dangerous than we realize.
By now, we may think we know fake headlines well enough not to fall into the trap.
What we think of as the “prestige” media, news outlets with established reputations for careful journalism, are now often copying, intentionally or not, whatever happens to go viral on social media.
There isn’t anything particularly wrong with using the language of headlines that everyone else uses. But it is a signal that there may be something wrong with the news today, when the institution of the press is following the fashions of fake news found on social media.
In identifying misinformation, we often focus too closely on the superficial and obvious aspects of this shiny new concept of fake news—a fake headline accompanying a fake news article of actual falsehoods.
The public’s attention, after all, is a delicate beast, easily distracted.
Rather than newsworthiness being decided by a media gatekeeper, users have actively become their own gatekeepers, deciding whether content is “shareworthy”.
Stories go viral because of this “shareability” factor, but there may be no rhyme or reason as to why.
The traditional news media can no longer just passively rely on their reputations to get their stories read. To survive, publications have had to adapt their way of telling stories to social media standards, racing to beat social media for the scoop in a competitive struggle for limited reader and viewer attention.
In doing so, they partially give up their role as gatekeeper to what is newsworthy, and the relationship of trust between the news outlet and its follower can start to erode, especially if expectations are not met.
This doesn’t mean a change in the accuracy or neutrality of their core coverage. But it results in a provocative framing for their headlines, tenuously true, that can leave a disastrously false impression.
So, at the outset, a headline, and how it’s framed, can do a lot of damage to how readers receive information and how they interpret that information.
Researchers conclude that “news consumers must be (made) aware that editors can strategically use headlines to effectively sway public opinion and influence individuals’ behavior.”
Based on all this information, it appears we, the public, have become the new drivers of what is considered the truth when it comes to news.
So, how do we fix this mess?
Let’s start with a couple of quotes:
What is the hardest task in the world? To think. – Ralph Waldo Emerson
Thinking is the hardest work there is, which is the probable reason why so few engage in it. – Henry Ford

Every day, I’m amazed at the amount of information I consume; I listen to the news in the morning, check my social media accounts throughout the day, and watch some TV before I go to bed, all while getting constant updates via email and social media.
It can be overwhelming, but things get really interesting when some of that information is biased, inaccurate, or just plain made up. It makes it hard to know what to believe. Even with all the competing sources and opinions out there, getting the truth — or at least close to it — matters. What you believe affects what you buy, what you do, who you vote for, and even how you feel. In other words, it virtually dictates how you live your life.
So how can you figure out what is true and what is not?
Well, one way is by learning to think more critically. Critical thinking is as simple as it sounds — it’s just a way of thinking that helps you get a little closer to the best answer.
Critical thinking is just deliberately and systematically processing information so that you can make better decisions and generally understand things better.
So the next time you have a problem to solve, a decision to make or information to evaluate, here are methods you can use to help you find the truth.
1. Don’t Take Anything at Face Value
The first step to thinking critically is to learn to evaluate what you hear, what you read, and what you decide to do. So, rather than doing something because it’s what you’ve always done or accepting what you’ve heard as the truth, spend some time just thinking. What’s the problem? What are the possible solutions? What are the pros and cons of each? If you really evaluate things, you’re likely to make a better, more reasoned choice.
As the saying goes, “When you assume, you make an ass out of you and me.” It’s quite easy to make an ass of yourself simply by failing to question your basic assumptions.
Some of the greatest innovators in human history were those who simply looked up for a moment and wondered if one of everyone’s general assumptions was wrong. From Newton to Einstein, questioning assumptions is where innovation happens.
If everyone is thinking alike, then somebody isn’t thinking. – George S. Patton

2. Consider Motive
Where information is coming from is a key part of thinking critically about it. Everyone has a motive and a bias. Sometimes, it’s pretty obvious; other times, it’s a lot harder to detect. Just know that where any information comes from should affect how you evaluate it — and whether you decide to act on it.

3. Do Your Research
All the information that gets thrown at us on a daily basis can be overwhelming, but if you decide to take matters into your own hands, it can also be a very powerful tool. If you have a problem to solve, a decision to make, or a perspective to evaluate, start reading about it. The more information you have, the better prepared you’ll be to think things through and come up with a reasonable answer to your query.
I have a personal library of over 3500 books and I use them all the time for research. You have access to your local library and an unlimited amount of good info on the internet.
Don’t rely solely on Google. The Library of Congress online is a great source of information. Another search engine I use a lot is called Refseek (www.refseek.com). It contains over a billion books, documents, journals and newspapers.
When you’re trying to solve a problem, it’s always helpful to look at other work that has been done in the same area.
It’s important, however, to evaluate this information critically, or else you can easily reach the wrong conclusion. Ask the following questions of any evidence you encounter:
How was it gathered, by whom, and why?
4. Ask Questions
I sometimes find myself shying away from questions. They can make me feel a little stupid. But mostly, I can’t help myself. I just need to know! And once you go down that rabbit hole, you not only learn more, but often discover whole new ways of thinking about things. I tell people all the time, there are no stupid questions. That is how you learn.
Sometimes an explanation becomes so complex that the basic, original questions get lost. To avoid this, continually go back to the basic questions you asked when you set out to solve the problem. What do you already know? How do you know that? What are you trying to prove, disprove, demonstrate, critique, etc.?

5. Don’t Always Assume You’re Right
I know it’s hard. I struggle with the hard-headed desire to be right as much as the next person. Because being right feels great. But assuming you’re right will often put you on the wrong track when it comes to thinking critically. Because if you don’t take in other perspectives and points of view, and think them over, and compare them to your own, you really aren’t doing much thinking at all — and certainly not the critical kind.
Human thought is amazing, but the speed and automation with which it happens can be a disadvantage when we’re trying to think critically. Our brains naturally use mental shortcuts to explain what’s happening around us.
This was beneficial to humans when we were hunting large game and fighting off wild animals, but it can be disastrous when we try to decide who to vote for.
A critical thinker is aware of their biases and personal prejudices and how they influence seemingly “objective” decisions and solutions.
All of us have biases in our thinking–it’s awareness of them that makes thought critical.
6. Break It Down
Being able to see the big picture is often touted as a great quality, but I’d wager that being able to see that picture for all its components is even better. After all, most problems are too big to solve all at once, but they can be broken down into smaller parts. The smaller the parts, the easier it’ll be to evaluate them individually and arrive at a solution. This is essentially what scientists do; before they can figure out how a bigger system — such as our bodies or an ecosystem — works, they have to understand all the parts of that system, how they work, and how they relate to each other.

7. Keep It Simple
In the scientific community, a line of reasoning called Occam’s razor is often used to decide which hypothesis is most likely to be true. It means finding the simplest explanation that fits all the facts, which is usually the most obvious explanation, at least until it’s proven wrong. Often, Occam’s razor is just plain common sense. When you do your research and finally lay out what you believe to be the facts, you’ll probably be surprised by what you uncover. It might not be what you were expecting, but chances are it’ll be closer to the truth.
Some of the most amazing solutions to problems are astounding not because of their complexity, but because of their elegant simplicity. Look for the simple solution first.

Conclusion:
Critical thinking is not an easy topic to understand or explain, but the benefits of learning it and incorporating it into your life are huge.

Remember :
1. Don’t Take Anything at Face Value
2. Consider Motive
3. Do Your Research
4. Ask Questions
5. Don’t Always Assume You’re Right
6. Break It Down
7. Keep It Simple

I will close with one final quote:
Anyone who stops learning is old, whether at twenty or eighty. Anyone who keeps learning stays young. – Henry Ford
What do you think? Can you adopt critical thinking in your life? Better yet, can you pass it on to those who refuse to use it?

2020: Five issues to think about.

Topic #1
Virginia’s attempt at gun control has brought about 2nd amendment sanctuary counties and local militias. I see this happening in many states throughout the upcoming year.
A few facts:
1. There were 600,000 deer hunters in Wisconsin this year. That number would make them the 8th largest army in the world. Larger than Iran. More than France & Germany combined.
2. Michigan had 700,000 deer hunters. Pennsylvania had 750,000. West Virginia had another 250,000.
3. Those 4 states alone would make up the largest army in the world.
4. The number of hunters in Texas alone would be the largest standing army in the world all by itself.
The point? Japanese Admiral Yamamoto was asked during WWII, “Why did the Japanese not invade the US?” His answer: “Behind every blade of grass in the US is an American with a gun.”
The 2nd amendment is a matter of national defense.

Would you support gun confiscation laws such as those proposed in Virginia?
If not, would you support 2nd amendment sanctuary counties in Missouri and the formation of local militias if necessary?

2nd Topic:

A continuation of the criticism of capitalism, glorification of socialism, and climate change hysteria, getting even worse than in 2019 as 2020 moves along toward the political conventions.
Democrats don’t want to be labeled as socialists, but the ideology is popular enough among some Democratic voters to become an important point of debate this election cycle. A 2018 Gallup poll found that most Democrats had a positive view of socialism, while less than half of young Americans between the ages of 18 and 29 had a positive view of capitalism.
House Speaker Nancy Pelosi has continuously rejected the socialist label for the Democratic Party. She argued in an interview with “60 Minutes” that Democratic lawmakers “know that we have to hold the center.” She added that if people in her party support socialism, “that’s their view. That is not the view of the Democratic Party.”
At least 46 democratic socialists running in the 2018 midterms won their primaries, while others were elected to public office in various local jurisdictions. Three democratic socialists won alderman positions in Chicago’s elections earlier this year, and most recently Tiffany Cabán came within a handful of votes of winning the Democratic primary for Queens District Attorney in New York.

So, while some centrist Democrats continue to distance themselves from democratic socialists, the party’s far left base sees an opportunity to keep winning in 2020.
It is this grassroots, ground-up movement that needs to be watched.

3rd Topic:
Donald Trump ordered an airstrike that killed Iran’s most powerful general in the early hours of Friday, January 3rd.
Qassem Suleimani was hit by the drone strike while local allies from the Popular Mobilisation Forces (PMF) drove him from Baghdad airport. The leader of the PMF, Abu Mahdi al-Muhandis, a close Suleimani associate, was also killed in the attack.
“General Suleimani was actively developing plans to attack American diplomats and service members in Iraq and throughout the region,” a Pentagon statement said. “This strike was aimed at deterring future Iranian attack plans.”
The strike came at a time when Iraq was already on the brink of an all-out proxy war, and hours after a two-day siege of the US embassy in Baghdad by a mob of PMF militants and their supporters.
That siege followed US airstrikes on camps run by a PMF-affiliated militia particularly closely aligned with Tehran, which in turn were a reprisal for that militia’s killing of a US contractor in an attack on an Iraqi army base the previous week.
The US Democratic presidential candidate Joe Biden said Trump had “tossed a stick of dynamite into a tinderbox”. His fellow Democratic hopefuls Elizabeth Warren and Bernie Sanders warned the attack could spark a disastrous new war in the Middle East.

What are your thoughts? Was Trump justified in taking out this Iranian general?

4th Topic:
Major shake-ups and surprises in the Democratic presidential sweepstakes early on in 2020, playing out before the Democratic National Convention.
1884: “Rum, Romanism and Rebellion”
In 1884, James G. Blaine, the Republican presidential nominee from Maine, attended a GOP meeting in October at which a Presbyterian minister named Dr. Samuel Burchard accused the Democrats of representing “rum, Romanism, and rebellion” — that is, alcohol, Catholics, and the Confederacy.
Blaine didn’t object, a silence he later claimed was because he either couldn’t hear the comment or wasn’t paying attention. But that didn’t matter: the public furor that followed cost Blaine thousands of votes from anti-prohibitionists, Roman Catholic immigrants, and southerners. The comment energized Irish voters in New York to vote against Blaine in droves, likely costing him the state—and with it, the election, which Grover Cleveland won.
1980: Iran Holds Carter’s Campaign Hostage
In the Carter-Reagan election, October Surprises entered the world of conspiracy theories. As the story goes, Ronald Reagan was worried that a last-minute deal to release the American hostages in Iran would give President Jimmy Carter the support he needed to win reelection. Then, days before U.S. voters cast their ballots, Iran announced that it would not release the hostages until after the election.
Allegations quickly took root over the cause of Iran’s statement. Jack Anderson of the Washington Post claimed that President Carter had been planning a military operation to save the hostages, hoping it would save him the election. Others alleged that Ronald Reagan had made a secret deal with the Iranians to postpone the hostage release and rob Jimmy Carter of victory.
That November, Reagan defeated Carter, and Iran continued to hold 52 Americans hostage, releasing them mere minutes after Ronald Reagan completed his inaugural address in January 1981. Political figures and hostages themselves demanded a probe into the timing of the incident, but Congress didn’t bite until later, when two congressional investigations found no evidence of a conspiracy between Reagan and Iran.
2000: George W. Bush’s DUI
George W. Bush and Al Gore were tied in national polls in the days leading up to the 2000 presidential election, but then Fox News Channel broke the biggest scandal of Bush’s campaign: 24 years earlier, Bush had been arrested for drunk-driving in Maine.
Though the Bush campaign told reporters that the incident was so long ago that it would do little to change voters’ minds, Bush strategist Karl Rove wrote ten years later that he believed the scandal cost Bush five states. Many would question that math, but Rove believes that without the DUI news, Bush would have won the popular vote and the mess in Florida would have been avoided.

5th Topic: Education. I see a movement by the states to push for more trade schools, charter schools and home schooling in the upcoming year.
The Trump administration renewed its push for school choice on Thursday with a proposal to provide $5 billion a year in federal tax credits for donations made to groups offering scholarships for private schools, apprenticeships and other educational programs.
Education Secretary Betsy DeVos unveiled the plan as a “bold proposal” to give students more choices without diverting money from public schools.
“What’s missing in education today is at the core of what makes America truly great: freedom,” DeVos said. “Kids should be free to learn where and how it works for them.”
Legislation for the tax credits is being introduced by Sen. Ted Cruz, R-Texas, and Rep. Bradley Byrne, R-Ala.

State Militias. A good idea?

Next year, Democrats will control both houses of Virginia’s state Legislature as well as its governorship. On November 18, State Sen. Dick Saslaw introduced a bill that he will sponsor in the 2020 legislative session. That bill will outlaw not only the sale or transfer but also the possession of certain firearms.
Saslaw’s bill — SB 16 — provides that “It is unlawful for any person to import, sell, manufacture, purchase, possess or transport an assault firearm” and makes such actions a Class 6 felony. (In Virginia, Class 6 felonies are punishable by imprisonment for between one and five years.)
SB 16 provides that a wide range of center-fire rifles, pistols, and shotguns fall within the definition of prohibited “assault firearms.” These include:
1. A semi-automatic center-fire rifle with a fixed magazine capacity in excess of 10 rounds;
2. A semi-automatic center-fire rifle that has the ability to accept a detachable magazine and has one of the following characteristics: (i) a folding or telescoping stock; (ii) a pistol grip that protrudes conspicuously beneath the action of the rifle; (iii) a thumbhole stock; (iv) a second handgrip or a protruding grip that can be held by the non-trigger hand; (v) a bayonet mount; …
3. A semi-automatic center-fire pistol with a fixed magazine capacity in excess of 10 rounds;
Basically, every rifle of the common AR-15 design and a great many pistols and shotguns in common use for personal defense, target shooting, and hunting would be banned.
Not only would they be banned, but because SB 16 makes it illegal to possess such firearms, they also would have to be either surrendered to or seized by police authorities in the jurisdiction in which they are located.

In its 2008 decision in District of Columbia v. Heller, the Supreme Court held that the Second Amendment protects an individual right to keep and bear arms, not one limited to militia service.
The 27 words of the Second Amendment state: “A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.”
SB 16, if enacted, would go much further than any previous American gun control law by making possession of the covered firearms illegal, rendering them subject to seizure from their owners.
The first major U.S. gun control legislation was the National Firearms Act of 1934. Enacted as a result of the use of automatic weapons — principally the Thompson submachine gun — by outlaw gangs, the act made illegal the possession of machine guns and short-barreled shotguns unless the owner of one paid for and was issued a government tax stamp for it. The law’s constitutionality was affirmed by the Supreme Court as late as 1991.
Again, SB 16 makes possession of “assault weapons” illegal outright, with no means for law-abiding citizens to keep such weapons in their homes or businesses, or to use them for hunting or target shooting.
If SB 16 is enacted by the Democrat-dominated Legislature and signed by Virginia’s gun-control-minded Gov. Ralph Northam, its effect will be blocked for months or years by legal challenges to its perfectly clear unconstitutionality.
In determining the constitutionality of that law, the courts will have to consider a long line of gun control decisions of the Supreme Court.
D.C. v. Heller ruled (in a brilliant decision written by the late Justice Antonin Scalia) that the Second Amendment protects a personal right that is not limited by the prefatory phrase about well-regulated militias. In the decision, Scalia referred to historical sources, such as 18th-century dictionaries, to show that the definition of “arms” included not only weapons of war but also all firearms. Scalia’s decision states specifically that

Some have made the argument, bordering on the frivolous, that only those arms in existence in the 18th century are protected by the Second Amendment. We do not interpret constitutional rights that way. Just as the First Amendment protects modern forms of communications … and the Fourth Amendment applies to modern forms of search … the Second Amendment extends to all instruments that constitute bearable arms, even those that were not in existence at the time of the founding.
Virginia cannot constitutionally take away its citizens’ individual rights to self-protection. If enacted, SB 16 would do just that.
So what can be done?
The residents of Tazewell County have come up with their own solution.
On Tuesday, December 10th, the Board of Supervisors passed two different resolutions.
The first resolution declared the County to be a Second Amendment Sanctuary.
The second promoted the order of militia in the county.
When the resolutions passed, the crowd cheered loudly in support of the decision. And they didn’t squeak by–the votes were unanimous, with more than 200 citizens standing by in support.
The militia resolution had already unofficially passed thanks to a poll taken by the county earlier in the month. But Board Chairman Travis Hackworth said that voters kept calling for the county to declare itself a Second Amendment Sanctuary, as well.
Hackworth went on to say that the Board of Supervisors has three lawyers on it. The three lawyers carefully examined some of the other declarations passed by other Virginia counties to make sure that theirs didn’t miss anything or water anything down.
The ‘teeth’ in these measures usually come down to two things: funding and prosecution. Tazewell County’s resolutions both would eliminate funding for any branch of law enforcement that would infringe on the rights of citizens to keep and bear arms.
But if the state tried to turn the tables, it could deny the county funding in areas other than law enforcement, or even attempt to remove the elected officials standing in its way.
Given the threats from Governor Northam and Congressman McEachin this week, those are very legitimate fears.
That is where the militia resolution comes in. County Administrator Eric Young laid out their thought process: “Our position is that Article I, Section 13, of the Constitution of Virginia reserves the right to ‘order’ militia to the localities,” Young said. “Therefore, counties, not the state, determine what types of arms may be carried in their territory and by whom. So, we are ‘ordering’ the militia by making sure everyone can own a weapon.”
If the Governor or any other State entity tries to remove their Sheriff from office for disobeying unjust laws, they’ll face a legally assembled group of armed citizens standing against them.
The militia ordinance also calls for concealed weapons training for any county resident who is eligible to own a gun. Further, it calls for the local public schools to begin offering firearm safety classes.
Now for the latest update:
Virginia Democratic leaders abandoned their gun confiscation proposal Monday following a grassroots outpouring of opposition to gun control across the state.
Governor Ralph Northam (D.) and incoming Senate majority leader Dick Saslaw (D.) said they will no longer pursue their plan to ban the possession of “assault weapons.” Instead, they will include a provision to allow Virginians to keep the firearms they already own.
The reversal comes before the newly elected Democratic majority has even been sworn in, after a majority of the state’s counties declared themselves “Second Amendment sanctuaries.”
“In this case, the governor’s assault weapons ban will include a grandfather clause for individuals who already own assault weapons, with the requirement they register their weapons before the end of a designated grace period,” Northam spokeswoman Alena Yarmosky told the Virginia Mercury.
The Democrats’ backtracking may indicate a trend in Virginia’s gun debate. Gun-control advocates (including Democrat candidate Bloomberg) poured millions of dollars into successfully flipping the state legislature, but the outpouring of opposition to their agenda may cause some new members of the state legislature to be cautious about backing gun control.
The Virginia Citizens Defense League, which has pushed counties to refuse to enforce unconstitutional gun laws, said there is “no doubt” the Democrats’ retreat was a result of the Second Amendment sanctuary movement.
There were 59 Second Amendment sanctuary counties in the state as of yesterday.
So what about this militia thing? Can they do that?
The idea of a militia – that is, a group of armed citizens who enter military service in time of need – has a long history in the United States.
America’s militia tradition extends back to England, beginning with the Assize of Arms of 1181, in which it was written that:
“He will possess these arms and will bear allegiance to the lord king, Henry, namely the son of empress Maud, and that he will bear these arms in his service according to his order and in allegiance to the lord king and his realm.”

This was further reinforced in 1285 by the Statute of Winchester, which stipulated:
“Every man shall have in his house arms for keeping the peace according to the ancient assize.”
Perhaps the strongest cultural tradition to transfer from England to its colonies was the distrust of a standing army that could enforce the crown’s will and circumvent parliament.
England’s strength lay in its navy, which was out of sight – and often out of mind – and could not project power inland. The army was not considered a gentleman’s occupation and soldiers were looked upon as mere pawns.
Through the colonial conflicts of the 17th and 18th centuries, English colonists in North America had plenty of opportunities to encounter regular British army soldiers.
These interactions were not always positive. The often devoutly religious colonists saw the regulars as profane, uncouth and generally prone to immoral behavior. For their part, the soldiers thought the colonial militia prayed too much and were prone to flee when the shooting started.
The militia’s record during the wars of the colonial period was mixed. There were notable collapses, such as militia refusing to cross colony lines – an issue that would persist well into the 19th century – but there were successes as well.
The most notable came in the 1745 all-militia expedition to seize the French fortress of Louisbourg in present-day Nova Scotia. After a conventional siege, the amateur army took the bastion, much to the surprise of leaders in France and England alike.
For the most part, the militia were a useful auxiliary force for the British in North America, performing less-than-vital tasks and thus freeing up regulars for offensive military operations.
Each colony had its own militia laws, but most enlisted the aid of all able-bodied white males, usually between the ages of 18 and 45. These units were formed under the auspices of the colony’s charter, and individuals were responsible for equipping themselves.
The first muster of full militia regiments took place in 1636 in the Massachusetts Bay Colony. It remained especially strong in the New England states, where militia units developed into political and social institutions as well as military organizations.
The political class that emerged in the colonies during the run-up to the Revolutionary War was often very active in the militia. Likewise, radical groups like the Sons of Liberty infiltrated New England’s militia, ensuring that the citizen armies were sufficiently loyal to the cause of independence when hostilities kicked off at Lexington and Concord.
Militia units formed the backbone of the American military at the outset of the revolution. As the war continued, the militia was used to augment the Continental Army.
While the militia units of the War of Independence were amateurs, just like their colonial forerunners, they did score some victories for the rebellion.
It was the militia that carried out the Siege of Boston and gave George Washington an army with which to prosecute the war before the Continental Congress could provide authorization for a semi-professional force.
The militia traditions ensured that there were trained and (somewhat) ready troops to fill the ranks of the Continental Army, as well as experienced officers.
When the American Revolution ended, Congress cut the regular army down to a tiny force in response to an anti-monarchy mood in the former colonies that viewed a standing military as a danger to a free people.
The Federalists, who favored a strong central government, wanted a national army and navy to protect American sovereignty.
Their opponents, the Democratic Republicans, were convinced that a permanent military would only give more power to the federal government and reduce the authority of the states.
The Framers of the Constitution eventually got their way, angering the Anti-Federalists by establishing a larger army, and more importantly, by giving Congress authority over the militia.
Article I, Section 8 (the Militia Clause) states:
“Congress shall have the power to: provide for calling forth the Militia to execute the Laws of the Union, suppress Insurrections and repel Invasions; to provide for organizing, arming, and disciplining the Militia, and for governing such Part of them as may be employed in the Service of the United States, reserving to the States respectively, the Appointment of the Officers, and the Authority of training the Militia according to the discipline prescribed by Congress.”
This shifted overall control of the militia from the states to Congress. The Second Amendment to the Constitution added the often-cited phrase: “a well-regulated militia, being necessary to the security of a free state, the right of the people to keep and bear arms, shall not be infringed.” And yet the militia were already susceptible to control by the federal government as outlined in Article I, although this was often disputed by state governments.
The Civil War
At the outbreak of the U.S. Civil War, Washington needed to expand the federal army and called upon the states to raise 90-day “volunteer” units, which were largely made up of militia.
Generally speaking, the first volunteer regiments sent from each state were formed from the Volunteer Militia organizations, many of which could trace their roots back to the colonial period.

These regiments, for the most part, compiled outstanding records of service in the Civil War and demonstrated that a militia culture could be of great value to the nation.
Following the Civil War, volunteer militias continued. Through the 1880s, most states continued to organize, fund and regulate their own militias.
The U.S. Volunteers were used again in the Spanish-American War in 1898.
The Militia Act of 1903 created the National Guard out of the Organized Militia and designated the remaining males aged 17 to 45 – those eligible for the draft – as the Reserve Militia.
This removed more control of the militia from the states, but provided additional funding for training, equipping, and manning the force.
It was the National Defense Act of 1916 that fully modernized the National Guard, providing federal funding for drills, annual training, and equipment.
It did, however, stipulate that in return, the War Department and the army gained far more control over the militia; for example, the army was now able to dictate what types of units would be raised in each state.
The act also removed the issue of militia serving outside the United States by stipulating that when called into service by the president, the National Guard would function like regular federal troops.
Since then, the National Guard has served with distinction in all the major conflicts of the United States. The idea of the citizen-soldier still retains its popularity, and for good reason: the National Guard ensures a link between civilians and the military in this age of the all-volunteer force.
So where does that leave us?
Most states still have militia laws on their books, which provide authorization for State Defense Forces or State Guards.
Some states – like Texas – have far-reaching militia laws that allow the governor to call up private citizens as part of an unorganized militia in the event of invasion or natural disaster.
Virginia has similar provisions that call for a National Guard, the Virginia Defense Force, and the unorganized militia.
The Virginia militia system, a compulsory service composed of the body of the people trained to arms as envisioned by George Mason, remained intact until the end of the American Civil War.
Reconstruction governments forced upon Virginia an all-volunteer militia system in opposition to Virginia’s Bill of Rights. The militia became statutorily composed of the volunteer and the unorganized militia.
In 1971, the Virginia Bill of Rights under Article I, Section 13, was changed to the following by popular vote:
“That a well regulated militia, composed of the body of the people, trained to arms, is the proper, natural, and safe defense of a free state, therefore, the right of the people to keep and bear arms shall not be infringed; that standing armies, in time of peace, should be avoided as dangerous to liberty; and that in all cases the military should be under strict subordination to, and governed by, the civil power.”
So there you have it, folks. When threatened by a law that would infringe on their constitutional right to bear arms, the people of Virginia called up their unorganized militia to defend themselves.
What do you think? Would you join them if called upon to do so?

Iran. Are American sanctions working?

Iranian state TV has said security forces killed what it called “thugs and rioters” during last month’s mass protests against a gas price increase.
Activists have accused the authorities of deliberately covering up the scale of the crackdown on the four days of unrest in more than 100 locations.
Amnesty International has said at least 208 people were killed, but others have put the death toll at close to 1000.
A judiciary spokesman has dismissed such reports as “utter lies”.
The authorities have not yet released any overall casualty figures.
The protests erupted in cities and towns across Iran on 15 November, after the government announced that the price of gasoline would rise by 50% to 15,000 rials a litre (roughly $0.45 per gallon), and that drivers would be allowed to buy only 60 litres (about 16 gallons) each month before the price rose to 30,000 rials (roughly $0.90 per gallon).
The decision was met with widespread anger in a country where the economy is already reeling as a result of US sanctions that were reinstated last year when President Trump abandoned a nuclear deal with Iran.
The authorities’ decision to almost completely shut down access to the internet made it hard to gather information about what was happening on the streets, but the video footage that reached the outside world appeared to show security forces shooting at unarmed demonstrators.
Interior Minister Rahmani Fazli said last week that as many as 200,000 people took part in the protests, and 731 banks, 70 gas stations and 140 government sites were set ablaze. More than 50 security bases were also attacked, he added.
Iran’s state-run IRTV2 channel broadcast a report confirming that there were fatalities during the unrest, but did not give any figures. It categorized those killed as “armed thugs and rioters”, security personnel, passers-by hit by crossfire or victims of “suspicious shootings”.

A correspondent said “rioters” had attacked sensitive or military sites with guns or knives, and taken people hostage, leaving security forces with no choice but to “resort to authoritative and tough confrontation” to save lives.
In the south-western city of Mahshahr, the correspondent added, “separatists” armed with machine-guns blocked roads and planned to blow up a petrochemical complex. In an interview, the city’s police chief said “wise” security forces “thwarted” an armed attack by people hiding in a local marsh.
The New York Times cited witnesses and medics in Mahshahr as saying that members of the Revolutionary Guards surrounded, shot and killed 40 to 100 demonstrators in a marsh where they had sought refuge.
IRTV2’s report also mentioned that security forces had confronted rioters in Tehran and the suburb of Shahriar, and in the southern cities of Shiraz and Sirjan.
Amnesty International said on Monday that it had compiled its nationwide death toll of 208 from reports whose credibility it ascertained by interviewing a range of sources, including victims’ relatives, journalists and human rights activists. But it warned that the actual number was likely to be higher.
Other rights groups and sources inside Iran said the death toll was close to 400.
During a visit to London on Tuesday, President Trump said Iran was “killing perhaps thousands and thousands of people right now as we speak.” He added: “It is a terrible thing and the world has to be watching.”
Judiciary spokesman Gholamhossein Esmaili (Golum Hussein Hesmali) told reporters in Tehran that “the numbers and figures that are being given by hostile groups are utter lies, and the official statistics have serious differences with what they announced”.
Mr Esmaili also said that most of those detained “during the riots” had been released.
The judiciary, he added, was evaluating “efficiently and with precision” the cases of those still in custody, including about 300 people in Tehran.
Hossein Hosseini, a member of parliament’s national security committee, said last week that about 7,000 people were arrested during the unrest.
This third outbreak of demonstrations in three years reflects deepening economic woes and a sense of hopelessness for the people of Iran.
The cycle of protest and vicious repression is grimly familiar in the region.
Iran’s five-day internet shutdown helped to ensure that we still know relatively little about those events.
What we do know makes grim reading. It details police firing on crowds and in some cases shooting protesters as they ran away. The regime itself boasts of having made 1,000 arrests; others suggest four times that many may have been detained.
These were widespread protests, reportedly reaching 70% of provinces.
They appear to have been more in the mould of those seen in 2017 and 2018 – leaderless, economically driven, and drawing in poorer Iranians – rather than the more middle-class, urban and political “green movement” of 2009.
According to the authorities, around 87,000 people took part, mostly unemployed young men.
As I stated earlier, the spark was the abrupt increase in gas prices of almost 300%.
The government said it wanted to tackle fuel smuggling and give cash payments to the poorest three-quarters of Iran’s 80 million population.
One problem is that the price hikes arrived first. Another is that, owing to official incompetence and corruption, many do not trust the authorities to deliver what they promise. A third is that in many cases the cash will not offset people’s increased fuel costs.
The broader context is the US withdrawal from the Iran nuclear deal known as the Joint Comprehensive Plan of Action (JCPOA), and America’s choking of the Iranian economy, already suffering after decades of mismanagement.
Inflation and unemployment have soared. The impact has not only been on the daily struggle of Iranians to get by, but also, perhaps as critically, upon their morale: the US abandonment of the JCPOA dashed many people’s last hope.
The optimism and energy that surged when Hassan Rouhani signed the agreement have vanished.
Iranians are unlikely to see significant improvements in their dire economic conditions unless this international situation changes.
In the absence of progress, there is the real danger that Iran will provoke a regional crisis to draw international attention again.
Tehran was well prepared for these protests, given the unrest of the last two years and its role in Lebanon and Iraq, which are also dealing with demonstrations.
Yet more unrest will surely come. Accusing the US, Britain and others of stoking unrest, as the regime has done, will do nothing to persuade people that their dissatisfaction is being addressed. Brutal crackdowns fuel their grievances.
The regime has survived uprisings in the past. But now it is starting to kill demonstrators in great numbers.
The deadly drama playing out in Iran shows three things.
Tehran is increasingly in desperate economic straits, in part because of intense U.S. sanctions; Iranian popular discontent with the regime’s economic mismanagement seems to have reached a breaking point; and the regime is more frightened of popular unrest than at any time in recent years.
The demonstrations that began over fuel prices quickly became a sweeping, nationwide protest against the Iranian regime itself, with outbreaks in dozens of cities in every Iranian province, targeting especially government buildings such as police stations and state-owned banks.
The government’s response has been much more brutal than in previous outbreaks of protest, such as in 2017-2018, including a near-total shutdown of the internet and unrestrained use of violence by security forces.
The brutal crackdown is both evidence of the regime’s desperation at its own inability to sway popular opinion and a result of watching weeks of similar deadly protests (also directed against Iran) in Iraq and Lebanon.
“Fundamentally, it is an economic protest. But clearly, among some protesters, there is the opportunity to make broader complaints about the government,” said Henry Rome, an Iran analyst at the Eurasia Group.
The fuel price reform was meant to save the government a few hundred million dollars over the course of a year.
The fact that Iran would risk sparking such widespread anger for minimal economic gain underscores the dire condition of the Iranian economy, which has been hammered by U.S. sanctions that have left it unable to export practically any oil, one of the main sources of government revenue.
“They did the reform because they are broke,” said Alireza Nader, a senior fellow at the Foundation for Defense of Democracies (FDD). “People can’t afford a 300 percent increase in gas prices, but the regime didn’t have any other choice.”
Though it was a calculated risk, the fuel price reform was meant as a way to spur consumption among lower-income groups and save gasoline for export, Rome said.
Another problem is that many people simply didn’t believe the government would follow through on the cash transfers. Yet another is that they worried that higher gas prices would just trickle down to higher prices for all sorts of other consumer goods, at a time when annual inflation in Iran is officially at least 40 percent and perhaps as much as five times higher.
Though Iranian officials, including President Hassan Rouhani, have blamed foreign countries and especially the United States for organizing the uprising, the U.S. role is—as far as is publicly known—mostly indirect, rather than actively supporting opposition groups.
Since President Trump reimposed sweeping sanctions on Iran’s economy, including the ban on oil sales, Iran’s economy has been in a free fall.
Because of the increased pinch from sanctions, the International Monetary Fund recently revised downward its expectations for Iran’s economy: It now expects it to shrink by almost 10 percent this year.
Experts question whether the resumption of sanctions will help topple the regime or strengthen it.
But the protests, like those that also swept the country in 2017-2018, are about more than just U.S.-inflicted pain. Many Iranians are irate at rampant corruption and economic mismanagement, constants in the 40 years since Iran’s revolution.
“The underlying grievances were there without the maximum pressure campaign, but it’s the fiscal strain that the government is under which has forced it to take these steps, which has brought those grievances to the forefront,” Rome said. And once people are in the street, narrow protests can snowball.
“Once there is an avenue open for protest, the dam is burst,” he said.
If redoubled U.S. economic pressure is contributing to Iran’s distress, does that mean the Trump administration’s maximum pressure campaign is working?
If the administration’s goal was to change the thinking of Iranian leaders about the country’s destabilizing activities in the region and its pursuit of nuclear technologies, the answer seems to be a clear no.
As the U.S. economic noose has tightened, Iran has lashed out even more—attacking Saudi oil tankers and allegedly even a major Saudi oil facility, in addition to spending billions of dollars to prop up proxy terrorist groups throughout the region.
At the same time, Iran has steadily reneged on its commitments under the 2015 nuclear deal and has resumed enriching uranium at higher levels and installing more advanced centrifuges, which could shorten its path to the bomb.
If the Trump administration’s goal was to destabilize Iran to the point that the regime faces an existential threat from within, the economic pressure may be paying dividends.
Iran’s response to the protests this week has been unprecedented levels of violence and killings. Some Iran observers see that as a sign that the regime feels it is doomed.
“This is a full rebellion, not a fuel protest,” said Nader of the Foundation for Defense of Democracies (FDD).
“The regime wants an internet blackout so they can massacre their way out of this. But there is no way out. Even if this round is crushed, there will be more of this. There is no more oil and increasing isolation. So I don’t see any way for the regime to get out of this.”
Others think that the combination of cash handouts and brutal repression will, as so many times in the past, shore up the regime’s hold on power.
“This is not regime-threatening from an immediate security point of view. They have repressive force and are not afraid of killing their own people,” Rome said.
“They are not going anywhere. These are not the initial tremors of another revolution.”
So, is the United States going to war with Iran? Probably not.
Should the United States go to war with Iran? Probably not.
It is unlikely that we will find ourselves in a war with Iran in the near term because both sides are eager to avoid one.
Although some of President Trump’s advisors may welcome a clash with Tehran, he has consistently made clear that he wants to end American wars in the Middle East, not start new ones.
That has been behind his moves to pull US troops out of Syria and his unwillingness to become further involved in the Yemeni civil war.
On the other side of the Persian Gulf, Ayatollah Ali Khamenei, Iran’s Supreme Leader announced that there would be no war.
In case you don’t believe him, the Iranians have typically shown enormous respect for American conventional military power since they were shellacked by it in the late 1980s.
They know full well that in a full-on war, the US would do tremendous damage to Iran’s armed forces and could threaten the regime’s grip on power – which is the very thing they are seeking to avoid.
Of course, just because two countries don’t want a war doesn’t mean that they won’t stumble into one anyway.
Given the current tensions created by the American pressure on Iran and Tehran’s efforts to push back, the deployment of additional American military forces to the region, and the tendency for Iran’s Revolutionary Guards to occasionally take unauthorized aggressive actions, no one should rule out an unintended clash.
Even then, however, the most likely scenario would be a limited American retaliatory strike to demonstrate to Iran that Washington won’t be pushed around by a 98-pound weakling and that Tehran needs to keep its problem children under control.
Over the longer term, there are other threats. Because of its conventional military weakness compared to the US (and Israel), Iran has typically preferred to employ terrorism and cyber attacks – more often directed at American allies than at the US itself – to create problems for Washington without creating a pretext for a major US military response.
We’ve already seen that begin with mysterious attacks on Saudi oil tankers off the coast of the UAE, and drone strikes by Iran’s Houthi allies on Saudi oil pipelines.
These attacks have the twin benefits for Iran of hurting (and potentially humiliating) a key American ally while simultaneously jacking up the price of oil.
In the past, the Iranians have also mounted cyber-attacks on the computer network of Saudi Aramco, the company that controls the Saudi oil network.
If Iran continues this pattern of activity, the US and its allies will look for ways to deter and defeat the Iranian attacks.
At some point, the US (or the Israelis, if they get dragged in, as they often are) might decide to respond with a military strike.
And the Iranians might feel compelled to respond in kind, if only to demonstrate that they won’t take a punch without throwing one in return.
Again, there’s good reason to believe that that would be the end of it, but it’s not impossible that it might be the start of something bigger.
We should also remember that on occasion in the past, Iran has overstepped itself, mounting terror attacks that could have (and should have) crossed American red lines.
In 1996, Iran blew up the Khobar Towers housing complex in Saudi Arabia, killing 19 and injuring almost 400 American military personnel.
In 2011, Iran plotted to blow up a restaurant in Washington, D.C. while the Saudi Ambassador was dining there. The attack was foiled before it could be executed, but if it had happened, it could have killed scores of people in America’s capital. These were reckless Iranian moves at times of great tension that could easily have triggered a US military response.
Finally, there is the most important question of all: should the US be looking to pick a fight with Iran?
The best reason not to go to war with Iran is that we will almost certainly win. But in winning, we could easily cause the collapse of the Iranian regime, which would create the same kind of chaos and internal conflict in Iran that our failure to prepare for a full-scale reconstruction of Iraq caused there.
Iran has three times the population, four times the land area, and five times the problems of Iraq. Winning such a war with Iran only to have made the tremendous effort to stabilize and rebuild it probably won’t feel much like winning at all.
Folks, what do you think? Should we step into this mess in Iran or should we stand on the sidelines and see what happens?

A country divided. Why?

A recent article by Rachel Sheffield and Scott Winship in The American Conservative stated:
The highly-educated are concentrating together, depriving struggling communities and dividing the country…
Are we more divided as a nation today than we were before?
New research within the Joint Economic Committee’s Social Capital Project suggests that we are.
The findings indicate that Americans are more frequently dividing themselves geographically and along lines of education.
Highly-educated Americans have increasingly moved to a handful of states over the last several decades, leaving other places behind.
This “brain drain” has clear economic implications. Beyond economics though, it’s also likely draining social capital from many places, as communities lose talent and resources that would help support civic institutions.
Brain drain and educational sorting exacerbate political and cultural divides as well: Americans segregate themselves into communities where they more frequently reside near those similar to themselves, decreasing the likelihood of rubbing shoulders with those who see the world differently.
The Rust Belt, the Plains, and some states in New England are experiencing high levels of brain drain.
It’s not news that highly educated Americans are more likely to move.
America’s highly educated have consistently been more prone to pack up their bags and seek opportunity outside their hometowns.
But surprisingly, there have been few attempts to quantify the magnitude of the problem and assess whether it is getting worse.
To rectify that, researchers created brain drain measures that compare the number of highly educated adults leaving their birth states with the number moving in from elsewhere.
They found that today, highly educated movers in the U.S. tend to leave certain states and regions of the country at higher rates than in the past and concentrate in a smaller group of states that are home to booming metropolitan areas.
This leads to growing geographic divides between areas that are thriving and places that struggle. With fewer states retaining and attracting talent, more areas are left behind.
A handful of states have become exclusive destinations for the highly educated.
They not only hold onto more of their homegrown talent, but they also gain more highly educated adults than they lose. These talent-magnet states are along the West Coast, as well as the Boston-Washington corridor.
Beyond the coasts, a few other states, like Texas, are retaining their homegrown talent while simultaneously winning a balance of talent from elsewhere.
These “brain gain” states are like an elite club whose members trade among themselves.
For example, California draws the greatest share of its highly educated entrants from other brain gain states: New York, Illinois, and Texas, which are ranked third, fourth, and eighth, respectively, on net brain gain.
New York pulls in highly educated entrants primarily from New Jersey (ranked sixth on net brain gain) and California.
The most common origins of Texas’s entrants include California, Illinois, and New York.
On the opposite side of the coin are the many states that are not only bleeding highly educated adults but failing to attract others to replace them.
Rust Belt states—Pennsylvania, Ohio, Indiana, Michigan, Wisconsin, and Missouri—are particularly plagued by brain drain.
Several Plains states—Iowa and the Dakotas—as well as states in New England—Vermont and New Hampshire—are also experiencing high levels of brain drain.
Although this is hardly a new phenomenon for the Rust Belt, it’s become a worsening problem over the last 50 years for the other high brain-drain states mentioned.
Brain drain’s effects on state economies are obvious. Places that lose more of their highly educated adults are likely going to be economically worse off than those that retain or attract highly educated adults.
And if the highly educated are concentrating in fewer areas, then more parts of the country will be prone to economic stagnation.
Another way that brain drain’s educational divides can deplete social capital is by creating deeper political and cultural divides between Americans.
The highly educated more often hold liberal political views compared to those with less than a college education.
America’s major metropolitan areas tend to vote Democratic, while most other areas of the country vote Republican.
Those living in urban areas are also more likely to hold liberal political views, whereas those living in rural areas are more commonly conservative.

So, as a result of brain drain and self-sorting, Americans are now more likely to live in communities where they are isolated from people who hold different ideologies and values.
Less association between people of different viewpoints can exacerbate political divides, as people become more steeped in their own beliefs.
When those who are different are further away, it is easier to cast them as a faceless group of opponents upon whom all blame for America’s problems belongs, rather than as neighbors with whom to find common ground.
Ultimately, social segregation weakens the idea that, as Americans, we share something important in common with one another.
A growing federal government only adds to the problem of geographic divide.
Naturally, neither heartland traditionalists nor coastal cosmopolitans want to be ruled by the other camp.
However, with more power at the national level, national elections have higher stakes for everyone.
The strength of our relationships is crucial to the strength of our nation.
We must find ways to reach across the divides that separate us.
So why can’t we stand each other?
Why do Americans increasingly believe that those in the other party are not only misguided, but are also bad people whose views are so dangerously wrong-headed and crazy as to be all but incomprehensible?
Let me give you some reasons:
1. The end of the Cold War. The West’s victory in the Cold War means that (with the possible exception of jihadi terrorism) there is no longer a global enemy to keep us united as we focus on a powerful and cohesive external threat.
2. The rise of identity-group politics. On both the Left and the Right, the main conceptual frameworks have largely shifted in focus from unifying values to group identities. As Amy Chua puts it in Political Tribes (2018): “The Left believes that right-wing tribalism—bigotry, racism—is tearing the country apart. The Right believes that left-wing tribalism—identity politics, political correctness—is tearing the country apart. They are both right.”
3. Growing religious diversity. Current trends in American religion reflect as well as contribute to political polarization. One trend is growing secularization, including a declining share of Americans who are Christians, less public confidence in organized religion, and rising numbers of religiously unaffiliated Americans.
One consequence is an increasingly open questioning of Christianity’s once-dominant role in American public and political culture. But another trend is the continuing, and in some respects intensifying, robustness of religious faith and practice in many parts of the society.
This growing religious divide helps to explain the rise of several of the most polarizing social issues in our politics, such as gay marriage and abortion. It also contributes to polarizing the two political parties overall, as religious belief becomes an increasingly important predictor of party affiliation.
For example, among Democrats and Democratic-leaning U.S. adults, religiously unaffiliated voters (the “nones”) are now more numerous than Catholics, evangelical Protestants, mainline Protestants, or members of historically black Protestant traditions, whereas socially and theologically conservative Christians today are overwhelmingly Republican.
4. Growing racial and ethnic diversity. In the long run, increased racial and ethnic diversity is likely a strength. But in the short run—which means now—it contributes to a decline in social trust (the belief that we can understand and count on one another) and a rise in social and political conflict.
5. The passing of the Greatest Generation. We don’t call them the greatest for no reason. Their generational values, forged in the trials of the Great Depression and World War II—including a willingness to sacrifice for country, concern for the general welfare, a mature character structure, and adherence to a shared civic faith—reduced social and political polarization.

6. Geographical sorting. As I stated earlier, Americans today are increasingly living in politically like-minded communities. Living mainly with like-minded neighbors makes us both more extreme and more certain in our political beliefs.
7. Political party sorting. Once upon a time, there were such creatures as liberal Republicans and conservative Democrats. No longer.
Today almost all liberals are Democrats and almost all conservatives are Republicans. One main result is that the partisan gap between the parties is wide and getting wider.
8. New rules for Congress. The weakening and in some cases elimination of “regular order”—defined broadly as the rules, customs, and precedents intended to promote orderly and deliberative policymaking—as well as the erosion of traditions such as Senatorial courtesy and social fraternization across party lines—have contributed dramatically to less trust and more animosity in the Congress, thus increasing polarization.
It’s hard to exaggerate how much House Republicans and Democrats dislike each other these days.
9. New rules for political parties. Many reforms in how we nominate, elect, and guide our political leaders—shifting the power of nomination from delegates to primaries, dismantling political machines, replacing closed-door politics with televised politics, and shrinking the influence of career politicians—aimed to democratize the system. But they also weakened the parties as moderating, coalition-building institutions, and in doing so contributed to polarization.
10. New political donors. In earlier eras, money in American politics tended to focus on candidates and parties, while money from today’s super-rich donors tends to focus on ideas and ideology—a shift that also tends to advance polarization.
11. New political districts. Widespread gerrymandering—defined as manipulating district boundaries for political advantage—contributes significantly to polarization, most obviously by making candidates in gerrymandered districts worry more about being “primaried” by a more extreme member of their own party than about losing the general election.
12. The spread of media ghettos. The main features of the old media—including editing, fact-checking, professionalization, and the privileging of institutions over individuals—served as a credentialing system for American political expression.
The distinguishing feature of the new digital media—the fact that anyone can publish anything that gains views and clicks—is replacing that old system with a non-system that is largely leaderless.
One result made possible by this change is that Americans can now live in media ghettos. If I wish, I can live all day every day encountering in my media travels only those views with which I already agree.
Living in a media ghetto means that my views aren’t shaped, improved, or challenged, but instead are hardened and made more extreme; what might’ve been analysis weakens into partisan talking points dispensed by talking heads; moreover, because I’m exposed only to the most exaggerated versions of my opponents’ views, I come to believe that those views are so unhinged and irrational as to be dangerous.
More broadly, the new media resemble and reinforce the new politics, such that the most reliable way to succeed in either domain is to be the most noisy, outrageous, and polarizing.
13. The decline of journalistic responsibility. The dismantling of the old media has been accompanied by, and has probably helped cause, a decline in journalistic standards.
These losses to society include journalists who’ll accept poor quality in pursuit of volume and repetition as well as the blurring and even erasure of boundaries between news and opinion, facts and non-facts, and journalism and entertainment. These losses feed polarization.

So what have we learned?
For starters, we could probably make the list longer. For example, we could argue that rising income inequality should be added.
Second, we can see that some of these causes are ones we either can’t do much about or wouldn’t want to even if we could.
Third, few if any of these causes contain the quality of intentionality: None of them wake up each morning and say, “Let’s polarize!” Even those coming closest to reflecting the intention to polarize, such as gerrymandering, reflect other and more fundamental intentions, such as winning elections, advancing a political agenda, or gaining clicks or viewers.
The fourth conclusion is the most important. None of these 13 causes directly perpetuate polarization.
They are likely what analysts would call ultimate causes, but they are not immediate, direct causes. They seem to have shaped an environment that promotes polarization, but they are not themselves the human words and deeds that polarize.
We need a 14th cause, arguably the most important one. It’s certainly the most direct and immediate cause of polarization.
14. The growing influence of certain ways of thinking about each other. These polarizing habits of mind and heart include:
• Favoring either/or thinking.
• Treating one’s preferred values as absolutes.
• Viewing uncertainty as a mark of weakness or sin.
• Indulging in motivated reasoning (always and only looking for evidence that supports your side).
• Relying on deductive logic (believing that general premises justify specific conclusions).
• Assuming that one’s opponents are motivated by bad faith.
• Permitting the desire for approval from an in-group (“my side”) to guide one’s thinking.
• Succumbing intellectually and spiritually to the desire to dominate others.
• Declining for oppositional reasons to agree on basic facts and on the meaning of evidence.
These ways of thinking constitute the actual practice of polarization—the direct and immediate causes of holding exaggerated and stereotyped views of each other, treating our political opponent as enemies, exhibiting growing dislike and aggression in public life, and acting as if common ground does not exist.
What’s the lesson here? We need largely to think our way out. At this point in the process, unless some cataclysmic social change (economic collapse, another world war) does it for us, the first thing to change to get out of this mess is our minds.
One final consideration. It would be nice to make a straightforward “us versus them” enemies list when it comes to who’s to blame for the polarization of our nation.
But the fact is, none of us is without fault.
Some of us are more inclined to polarizing habits than others; some of us when we foster polarization are more aware of what we’re doing than others; and some of us (more and more of us, it seems) make a pretty good living these days out of encouraging and participating in polarization.
But the habits and temptations of polarization are always with all of us. That includes you and me, by the way.
The fault is in ourselves.
So, callers, what can we do to change the route we are on?

Who is Sarah Josepha Hale?

In 1827, the noted magazine editor and prolific writer Sarah Josepha Hale launched a campaign to establish Thanksgiving as a national holiday.
For 36 years, she published numerous editorials and sent scores of letters to governors, senators, presidents and other politicians.
Abraham Lincoln finally heeded her request in 1863, at the height of the Civil War, in a proclamation calling on all Americans to ask God to “commend to his tender care all those who have become widows, orphans, mourners or sufferers in the lamentable civil strife” and to “heal the wounds of the nation.”
So who was this Sarah Josepha Hale, also known as the “Mother of Thanksgiving”?
Sarah Josepha Buell was born in New Hampshire in 1788. She and her siblings were schooled at home. In her autobiography, written in 1837, she stated that
”I owe my early predilection for literary pursuits to the teaching and example of my mother. She had enjoyed uncommon advantages of education for a female of her times – possessed a mind clear as rock-water, and a most happy talent of communicating knowledge.”
A voracious reader of whatever books were available, Sarah noticed that “of all the books I saw, few were written by Americans, and none by women,” and she was inspired, at a very early age, to “promote the reputation of my own sex, and do something for my own country.”
The Ladies Wreath (Boston: Marsh, Capen & Lyon, 1837) was one of a number of “gift books” of uplifting poetry for women that Sarah edited throughout her long career.
Sarah’s brother, Horatio Gates Buell, was schooled at home with her. Unlike Sarah, however, Horatio could go to college.
He shared his Dartmouth textbooks with his sister; Sarah noted that “he seemed very unwilling that I should be deprived of all his collegiate advantages.”
This self-educated young woman began teaching school at age 18. She also began, in her spare time, to write poetry.
After six years of independent living, she married David Hale, a lawyer with strong literary interests of his own and an appreciation for his bride’s intelligence.
This idyllic life ended after only 9 years. In 1822, David Hale died of a stroke, leaving Sarah with 5 children; the oldest was 7 and the youngest was born two weeks after David’s death. Sarah was 34.
David Hale did not leave a large estate. Sarah now had 5 children – 3 sons and 2 daughters – to raise on her own, to educate and prepare for life. How was she to do this?
Sarah considered deeply and decided that the “very few employments in which females can engage with any hope of profit, and my own constitution and pursuits, made literature appear my best resource. I prepared a small volume of Poems, mostly written before my husband’s death; these were published, by the aid of the Free Masons, of which order he was a distinguished member.”
Sarah Josepha Hale’s second book of poetry, Poems for Our Children, published in 1830, contained one of the most famous poems in the English language – “Mary Had a Little Lamb.”

The poem became even more famous when it was republished in Juvenile Miscellany (an interesting note: the editor of Juvenile Miscellany was Lydia Maria Child, who would later write a famous Thanksgiving poem that begins “Over the river and through the woods, to grandfather’s house we go…”)
Even before Sarah had published her famous poem, however, she had written a novel, Northwood, published in Boston in 1827.
Northwood, which was descriptive of New England character and manners, first introduced to the American public what would become one of Sarah’s lifelong obsessions: the promotion of the holiday of Thanksgiving.
In Northwood, she gave the first detailed description to be found anywhere of this New England tradition:
“The provision is always sufficient for a multitude, every farmer in the country being, at this season of the year, plentifully supplied, and every one proud of displaying his abundance and prosperity. The roasted turkey took precedence on this occasion, being placed at the head of the table; and well did it become its lordly station, sending forth the rich odor of its savory stuffing, and finely covered with the froth of the basting. At the foot of the board, a sirloin of beef, flanked on either side by a leg of pork and loin of mutton, seemed placed as a bastion to defend innumerable bowls of gravy and plates of vegetables disposed in that quarter. A goose and pair of ducklings occupied side stations on the table; the middle being graced, as it always is on such occasions, by that rich burgomaster of the provisions, called a chicken pie. This pie, which is wholly formed of the choicest parts of fowls, enriched and seasoned with a profusion of butter and pepper, and covered with an excellent puff paste, is, like the celebrated pumpkin pie, an indispensable part of a good and true Yankee Thanksgiving”.
Several years later, in 1835, Sarah Josepha published a book of short stories entitled Traits of American Life. In one of those stories, “The Thanksgiving of the Heart,” she wrote:

”Our good ancestors were wise, even in their mirth. We have a standing proof of this in the season they chose for the celebration of our annual festival, the Thanksgiving. The funeral-faced month of November is thus made to wear a garland of joy… There is a deep moral influence in these periodical seasons of rejoicing, in which a whole community participate. They bring out, and together, as it were, the best sympathies of our nature. The rich contemplate the enjoyments of the poor with complacency, and the poor regard the entertainments of the rich without envy, because all are privileged to be happy in their own way.”
In these two books are the beginnings of what would grow to be one of Sarah Josepha Hale’s lifelong crusades.
The platform from which she would wage her holy war for a national Day of Thanksgiving was that of editor of Godey’s Lady’s Book.
In 1828, Sarah took on the editorship of the Ladies’ Magazine of Boston, the first magazine edited for women by a woman.
In 1837, the Ladies’ Magazine was united with the Lady’s Book, a magazine published in Philadelphia by Louis Godey.
Sarah became literary editor of the magazine that would become known as Godey’s Lady’s Book. Under her guidance, Godey’s would become the most widely read magazine of the 19th century and Sarah one of America’s most influential voices.
Sarah was by no means a feminist. “God,” she said, “has given to man authority, to woman he gave influence.”
A firm believer in separate spheres of activity for men and women, she was opposed to women’s suffrage and did not believe that most of the masculine professions should be opened to women.
She did, however, strongly believe that the status of women should be improved and that girls should be well educated.

As she expressed in an 1856 editorial “The companion of man should be able thoroughly to sympathize with him and her intellect should be as well developed as his. We do not believe in the mental inequality of the sexes, we believe that the man and the woman have each a work to do, for which they are specially qualified, and in which they are called to excel. Though the work is not the same, it is equally noble, and demands an equal exercise of capacity.”
Sarah used her editorial position as a platform to gently but persistently advocate for measures that she believed would improve family life in America.
Having experienced firsthand the difficulties faced by a widow raising a family, she fought for property rights for married women and improvements in women’s wages.
Her approach was conservative and diplomatic – Sarah realized that the support of masculine America was vital to her success. Her 1853 book, Woman’s Record; or, sketches of all distinguished women from “the beginning” till A.D. 1850…, is inscribed
“to the men of America; who show, in their laws and customs, respecting women, ideas more just and feelings more noble than were ever evinced by men of any other nation: may ‘Woman’s Record’ meet the approval of the sons of our great republic; the world will then know the daughters are worthy of honour.”
As editor, Sarah chose the features to be found in each monthly issue of Godey’s Lady’s Book – stories, fashions (including the famous hand-colored Godey’s fashion plates), recipes and household hints. She also continued her independent writing and editing career, which included cookbooks such as The Good Housekeeper.

The first year of her editorship, 1837, Sarah wrote the first of her Thanksgiving editorials. Praising the holiday for its domestic and moral influence, she suggested that it
“might, without inconvenience, be observed on the same day of November, say the last Thursday in the month, throughout all New England; and also in our sister states, who have engrafted it upon their social system. It would then have a national character, which would, eventually, induce all the states to join in the commemoration of “Ingathering,” which it celebrates. It is a festival which will never become obsolete, for it cherishes the best affections of the heart – the social and domestic ties. It calls together the dispersed members of the family circle, and brings plenty, joy and gladness to the dwellings of the poor and lowly.”
Sarah did not introduce the topic again until 1842, when she used the example of Thanksgiving to favorably compare New England to “Old” England:
“At this season every family, almost, in our land has the comforts of life, and nearly all have the hope and prospect of living thus comfortably through the coming seasons. In Old England it is not so. Thousands, aye, millions of her people are suffering daily from the want of all things!”
Sarah’s crusade for a national Thanksgiving really began in 1847, when she noted that
“The Governor of New Hampshire has appointed Thursday, November 25th, as the day of annual thanksgiving in that state. We hope every governor in the twenty-nine states will appoint the same day — 25th of November — as the day of thanksgiving! Then the whole land would rejoice at once.”
This was followed by editorials in 1848 (there were two that year!) and 1849. After a one-year gap in 1850, Sarah resumed her Thanksgiving editorials, continuing without interruption for more than 20 years.
As Sarah noted in one of her 1848 editorials
“the appointment of the [Thanksgiving] day rests with the governors of each state; and hitherto, though the day of the week was always Thursday, that of the months had been varied. But the last Thursday of last November [1847] was kept as Thanksgiving Day in twenty-four of the twenty-nine states — all that kept such a feast at all. May the last Thursday of the next November witness this glad and glorious festival, this ‘feast of the ingathering of harvest,’ extended over our whole land, from the St. Johns to the Rio Grande, from Plymouth Rock to the Sunset Sea.”

Sarah’s crusade was, therefore, two-fold. She wanted every governor of every state or territory to proclaim a Thanksgiving Day and she wanted that day to be uniform throughout America. Then, as she proclaimed in 1851, “There would then be two great American national festivals, Independence Day, on the Fourth of July, and Thanksgiving Day, on the last Thursday in November.” She explained her choice of the last Thursday in November in this way
“The last Thursday in November has these advantages — harvests of all kinds are gathered in — summer travelers have returned to their homes — the diseases that, during summer and early autumn, often afflict some portions of our country, have ceased, and all are prepared to enjoy a day of Thanksgiving.”
As years passed, Sarah’s editorials emphasized ever more strongly the unifying role that Thanksgiving could play within an increasingly divided nation. In 1859, she stated,
“We are already spread and mingled over the Union. Each year, by bringing us oftener together, releases us from the estrangement and coolness consequent on distance and political alienations; each year multiplies our ties of relationship and friendship. How can we hate our Mississippi brother-in-law? and who is a better fellow than our wife’s uncle from St. Louis? If Maine itself be a great way off, and almost nowhere, on the contrary, a dozen splendid fellows hail from Kennebec County, and your wife is a down-Easter.”
That year, 32 states and territories, plus the District of Columbia, celebrated Thanksgiving on the last Thursday in November.
In 1860, she wrote
“Everything that contributes to bind us in one vast empire together, to quicken the sympathy that makes us feel from the icy North to the sunny South that we are one family, each a member of a great and free Nation, not merely the unit of a remote locality, is worthy of being cherished. We have sought to reawaken and increase this sympathy, believing that the fine filaments of the affections are stronger than laws to keep the Union of our States sacred in the hearts of our people… We believe our Thanksgiving Day, if fixed and perpetuated, will be a great and sanctifying promoter of this national spirit.”
Sarah’s hopes were, of course, not to be fulfilled. In 1861, the bombardment of Fort Sumter opened the Civil War.
Sarah reported that, in 1861,
“this National Feast Day was celebrated in twenty-four States and three Territories; all these, excepting the States of Massachusetts and Maine, held the Festival on the same day, the last Thursday in November.” The “missing” states were, of course, those of the Confederacy.
Sarah did not give up the fight. Instead, she tried a different strategy. As she suggested in her 1863 editorial
“Would it not be of great advantage, socially, nationally, religiously, to have the DAY of our American Thanksgiving positively settled? Putting aside the sectional feelings and local incidents that might be urged by any single State or isolated Territory that desired to choose its own time, would it not be more noble, more truly American, to become nationally in unity when we offer to God our tribute of joy and gratitude for the blessings of the year? Taking this view of the case, would it not be better that the proclamation which appoints Thursday the 26th of November (1863) as the day of Thanksgiving for the people of the United States of America should, in the first instance, emanate from the President of the Republic to be applied by the Governors of each and every State, in acquiescence with the chief executive adviser?”
On September 28, 1863, Sarah Josepha Hale wrote to President Abraham Lincoln. The letter is preserved in the Papers of Abraham Lincoln at the Library of Congress. In it she wrote:
“As the President of the United States has the power of appointments for the District of Columbia and the Territories; also for the Army and Navy and all American citizens abroad who claim protection from the U. S. Flag — could he not, with right as well as duty, issue his proclamation for a Day of National Thanksgiving for all the above classes of persons? And would it not be fitting and patriotic for him to appeal to the Governors of all the States, inviting and commending these to unite in issuing proclamations for the last Thursday in November as the Day of Thanksgiving for the people of each State? Thus the great Union Festival of America would be established.”
Sarah Josepha’s petition brought the result she was seeking. On October 3, 1863, Lincoln issued a proclamation that urged Americans to observe the last Thursday in November as a day of Thanksgiving.
Sarah was not content to rest on her laurels for long. In 1871, she launched a further crusade – to have the national Thanksgiving Day proclaimed not by the President but by an act of Congress.
“It is eminently fit that this National Holiday shall rest upon the same legal basis as its companions, the Twenty-second of February and the Fourth of July. As things now stand, our Thanksgiving is exposed to the chances of the time. Unless the President or the Governor of the State in office happens to see fit, no day is appointed for its observance. Is not this a state of things which calls for instant remedy? Should not our festival be assured to us by law? We hope to see, before many months have elapsed, perhaps before our next Thanksgiving, the passage of an act by Congress appointing the last Thursday in November as a perpetual holiday, wherein the whole nation may unite in praise to Almighty God for his bounty and love, in rejoicing over the blessings of the year, in the union of families, and in acts of charity and kindness to the poor.”
By this time, however, Sarah’s energy and her influence were beginning to wane. She was 83 years old. Godey’s Lady’s Book was being overtaken by newer publications.
Nevertheless, Sarah continued to write Thanksgiving editorials until 1875.
Sarah Josepha Hale died in 1879, at age 90.
Seventy years after the launch of Sarah’s second crusade – to have the national Thanksgiving Day proclaimed not by the President but by an act of Congress – the U.S. Senate and House of Representatives passed a bill establishing that Thanksgiving would occur annually on the fourth Thursday of November.
On November 26, 1941, President Franklin Delano Roosevelt signed the bill into law.
So in closing, I would like to tell you the things I am most thankful for:
First and foremost, I am thankful that my wife, who has stage 4 cancer, is here to celebrate Thanksgiving with me.
Second, I am thankful for my family and friends.
Third, I am thankful to live in the greatest country in the world where everyone has opportunity, freedom, and the right to pursue their goals.
Others?
I’m thankful for those that serve this great nation especially our service men and women, law enforcement, and firefighters.
Finally, I am thankful to KRMS for continuing to give me the opportunity to broadcast my thoughts on a weekly basis to all of you, my listeners, the best radio audience in the nation!

So there is my list gang. How about you?

Was G.W.F. Hegel correct?

Georg Wilhelm Friedrich Hegel (born August 27, 1770, in Stuttgart, Württemberg [Germany]; died November 14, 1831, in Berlin) was a German philosopher who developed a theory that explained the progress of history.
Hegel gave us a new way to look at the fall of the Roman Empire, the end of the Renaissance, the end of the Enlightenment, and even the rise of fascism.
He viewed these upheavals as necessary, simply part of a process repeated many times throughout history; therefore, we should not give up hope when things are bad.
Hegel saw the mass murder brought about by massive political and economic change in his revolutionary and imperial age, but in his estimation, such man-made disasters were necessary occurrences, the “slaughter bench of history,” as he famously wrote in the Philosophy of History in 1830.
Hegel stated that history moves forward in what he termed a dialectic process.
For Hegel, the individual personality was not important, only collective entities: peoples, states, and empires.
These moved against each other according to a reasoning process working through history which Hegel called the dialectic.
So let’s put this dialectic process in the terms we usually use—thesis, antithesis, synthesis—though Hegel himself did not exactly use the same terms.
This is the common shorthand way of understanding how Hegel’s explanation of history works: “the world makes progress by swinging from one extreme to the other, as it seeks to overcompensate for a previous mistake, and generally requires three moves before the right balance on any issue can be found.”
So applying this to our current situation here in the US, we had 8 years of Obama and lived through his progressive social reforms and foreign policies.
We have now swung to the conservative extreme under President Trump.
As we approach the next election, we see the shift again pushing for extreme progressivism under Elizabeth Warren and socialism under Bernie Sanders.
Make sense?
How about a historical example:
The terror of the French Revolution of 1789 is a great example of thesis, antithesis, and synthesis in action.
In this case you have King Louis XVI and his wife Marie Antoinette as the thesis (established order).
The antithesis is the French people led by rebels within their newly elected parliament called the Estates General.
Revolution ensues and when the smoke clears, it gives way to the synthesis, the brutal autocratic empire of Napoleon in another extreme swing.
Now the roles change: once Napoleon has complete control of France, he becomes the new thesis (established order), and the whole cycle begins anew.
According to Hegel, the world makes progress by lurching from one extreme to the other as it seeks to overcompensate for a previous mistake.
Hegel assures us that in the darkest of times when all appears to be lost, we are merely seeing the pendulum swing back for a time, but that this period is needed because the initial move forward had been blind to crucial insights by the opposing view.
Hegel contends that all sides contain certain truths buried in extremism, exaggeration, and propaganda.
It is through the dialectic process that these truths will be sifted out through time.
This is why historians contend that we must remember our past.
Hegel states that the dark moments in history are simply a part of the dialectic process, an antithesis, that will bring about a future synthesis.
The question of whether or not genuine human progress is possible, or desirable, lies at the heart of many a radical post-Enlightenment philosophical project.
More pessimistic philosophers have, unsurprisingly, doubted it.
Arthur Schopenhauer cast total suspicion on the idea.
The Danish existentialist Soren Kierkegaard thought collective progress toward a more enlightened state an unlikely prospect.
One modern critic of progress, pessimistic English philosopher John Gray, writes in his book Straw Dogs that “the pursuit of progress” is an idealist illusion ending in “mass murder.”
These skeptics of progress all in some way write in response to G.W.F. Hegel, whose systematic thinking provided Karl Marx with the basis of his dialectical materialism.

In the case of Marx, the thesis is the rich, the antithesis is the common working man, and the synthesis is communism.
This suggests a very brutal view, and yet Hegel believed overall that “Reason is the Sovereign of the World; that the history of the world, therefore, presents us with a rational process.”

In our own time, we have encountered the progressive ideas of Hegel not only through Marx, but through the work of Martin Luther King, Jr., who studied Hegel as a graduate student at Harvard and Boston University and found much inspiration in the Philosophy of History.
Though critical of Hegel’s idealism, which “tended to swallow up the many in the one,” King discovered important first principles there as well: “His analysis of the dialectical process, in spite of its shortcomings, helped me to see that growth comes through struggle.”
We endlessly quote King’s statement that “the arc of the moral universe is long, but it bends toward justice,” but we forget his corresponding emphasis on the necessity of struggle to achieve the goal.
As Hegel theorized, “the dark moments aren’t the end, they are a challenging but in some ways necessary part… eminently compatible with events broadly moving forward in the right direction.”
King found his own historical synthesis in the principle of nonviolent resistance, which “seeks to reconcile the truths of two opposites,” as he wrote in 1958’s Stride Toward Freedom. Nonviolent resistance is not passive compliance, but neither is it intentional aggression.
Hegel and the socially influential thinkers he inspired, like Martin Luther King Jr. and John Dewey, the American philosopher and leader of the progressive movement in education, have generally operated on the basis of some faith in reason or divine justice.
There are much harsher, more pessimistic ways of viewing history than as a swinging pendulum moving toward some greater end.
Pessimistic thinkers may be more honest about the staggering moral challenge posed by increasingly efficient means of mass killing and the perpetuation of ideologies that commit it.
Yet it is partly through the influence of Hegel that modern social movements have embraced the necessity of struggle and believed progress was possible, even inevitable, when it seemed least likely to occur.
Hegel knew that just because men and women learned about the past, that didn’t mean they’d make better decisions about the future.
He once commented, “What experience and history teach us is this—that people and governments never have learned anything from history, or acted on principles deduced from it.”
Once the potential of a particular society had been realized in the creation of a certain way of life, its historical role was over; its members became aware of its inadequacies, and the laws and institutions they had previously accepted unquestioningly were now experienced as restrictions, inhibiting further development and no longer reflecting their deepest aspirations.
Thus, each phase of the historical process could be said to contain the seeds of its own destruction and to “negate” itself.
The consequence is the emergence of a new society, representing another stage in a progression whose final outcome is the formation of a rationally ordered community with which each citizen could identify himself and in which there would therefore no longer exist any sense of alienation or constraint.
Somewhat curiously, the type of community Hegel envisaged as exemplifying this satisfactory state of affairs bore a striking resemblance to the Prussian monarchy of his own time.
Saul Alinsky, the Chicago radical and godfather of “community organizing,” whose methods influenced a young Barack Obama, wrote in his radical left-wing book Rules for Radicals:
“Any revolutionary change must be preceded by a passive, affirmative, non-challenging attitude toward change among the mass of our people. They must feel so frustrated, so defeated, so lost, so futureless in the prevailing system that they are willing to let go of the past and change the future. This acceptance is the reformation essential to any revolution.”
Does this sound familiar folks? How do you currently feel about the whole state of affairs in Washington?
One thing I can get even my liberal friends to admit is that the system is broken and that both liberals and conservatives in our federal government are to blame.
Have we been led to this point intentionally?
If so, who led us here?
Alinsky’s approach in a nutshell: issues, problems, crises, conflict.
The purpose is to bring about “radical social change”–paradigm shift, fundamental transformation, transition, a new system, etc.
The scary part of Alinsky’s approach to “radical social change” is his belief in the Marxist-Leninist method of always keeping the masses demoralized so they will demand change, or even insist the system be abolished altogether.
It was Vladimir I. Lenin who originally put Marxist-style revolution into practice, i.e. the “dictatorship of the proletariat,” killing millions in the process, without mercy or compassion, and spreading Marxism-Leninism around the globe – an ideology that still exists to this day.
Although the ideas of Hegel, Marx and Engels greatly influenced Lenin, they merely developed the theory. It was Lenin, and those who followed in his footsteps, who took action and committed genocide on an industrial scale in the name of Marx and Hegel.

Good grief! Are we there?
I can’t help but point out some striking ironies and stark contrasts regarding Hegelian and Marxist dialectics. Although the dialectics of Hegel and Marx reject the very concept of absolutes and the deity of a Higher Power, the systematic and formulaic approach that Hegel and Marx use to employ their strategy requires absolutes.
For example, the Hegelian dialectic requires a thesis and an antithesis, a pro and a con. Are these not absolutes?
Is not the very concept of left and right, east and west, black and white, etc., required by the dialectic a confirmation of absolutism itself? In other words, there is no middle ground. You must be either all one way or all the other for the process to work.
Is that what is happening? Are we being driven from the position of compromise to being either total liberals or total conservatives? If so, who is driving us there?
Bella Dodd, a former communist who later left the Party and became a vocal anti-communist, in her book School of Darkness, stated:
“… I have had many occasions to see that this cataloging of people as either ‘right’ or ‘left’ has led to more confusion in American life than perhaps any other false concept. It sounds so simple and so right. By using this schematic device one puts the communists on the left and then one regards them as advanced liberals – after which it is easy to regard them as the enzyme necessary for progress. Communists usurp the position of the left, but when one examines them in the light of what they really stand for, one sees them as the rankest kind of reactionaries and communism as the most reactionary backward leap in the long history of social movements. It is one which seeks to obliterate in one revolutionary wave two thousand years of man’s progress.”

In this light, the leaders of Marxist-Leninist regimes throughout history appear not to be leaders, per se, but rather change agents, whose dialectical formula of seduction, deception and manipulation is injected into the masses to gain supremacy over all groups for the sake of so-called “unity in diversity.”
Therefore the role of the change agent is to create permanent conflict–forwarding the Hegelian and Marxist belief that all “progress” is brought about by conflict.
What do you folks think? Are we being led to a conflict by unknown sources?
Is history repeating itself?

Tragedy in Mexico. Can anything be done?

Recently, nine members of a Mormon community in northern Mexico died in an ambush by gunmen while travelling from their home on the La Mora ranch to a nearby settlement. But how did the victims, all US-Mexican citizens, come to be in the line of fire?
The dirt road that runs through the Sierra Madre mountains is remote, rocky and cold. It is controlled by men financed by Mexico’s illegal drug trade. It’s about as hostile a stretch of road as can be found in Mexico.
Eight-month-old twins, Titus and Tiana, died alongside their two siblings, Howard Jr, 12, and Krystal, 10, and their mother, 30-year-old Rhonita Miller.
Their grandfather filmed the aftermath of the cartel ambush with his mobile phone “for the record” as he put it, his voice cracking. The disturbing footage showed a blackened and still-smouldering vehicle, the charred human remains clearly visible inside.
Further up the road, two more cars, also full of mothers and young children, were attacked an hour later. In total, nine people were killed. Most were not yet teenagers, several were still toddlers.
Dawna Ray Langford and her sons Trevor, 11, and Rogan, two, were killed in one car while Christina Langford Johnson, 31, was killed in another. Her seven-month-old baby, Faith Langford, survived the attack. She was found on the floor of the vehicle in her baby seat.
Yet the story of how the LeBarón clan came to live in such a dangerous corner of northern Mexico is not one born of unity but of division, stretching back decades.
So now a little history:
The Mormon fundamentalists started to move to Mexico around 1890 when they split with the Church of Jesus Christ of Latter-day Saints (LDS).
Primarily, they parted ways over the question of polygamy, which the breakaway Mormon groups continued to practice while the mainstream church, based in Utah, prohibited polygamy to comply with US law.
Polygamy was illegal in Mexico too – but there was an understanding that the authorities would “look the other way about their marriage practices”, explains Dr Cristina Rosetti, a scholar of Mormon fundamentalism based in Salt Lake City.
“The families who went there were not ‘fringe families’ or ‘bad Mormons’,” she says. “These were leaders of the church; they weren’t peripheral people. Big names went down there.”
The LeBarón group’s patriarch, Alma “Dayer” LeBarón, established Colonia LeBarón in Chihuahua in the early 1920s.
In time, the Mormon community south of the border grew in number and wealth.
They purchased land in the states of Sonora and Chihuahua and set up ranches and other colonies. They thrived as pecan farmers, grew wheat, planted apple and pomegranate orchards and produced honey to sell at farmers’ markets.
By the 1950s the Mormon colonies had populations in the high hundreds to low thousands.
After Alma LeBarón died, the offshoot was led by his son Joel. In essence, it was the LeBarón Church, an independent fundamentalist Mormon denomination of which today there are several branches.
It was at this stage that the LeBarón family name took on its notoriety. Joel’s brother, Ervil LeBarón, was the second-in-command until they fell out over the direction of the church.
An unhinged and dangerous cult leader who had 13 wives and scores of children, Ervil then split and created a separate sect.
In 1972, he ordered his brother’s murder and it is believed Ervil’s followers killed dozens of others on his command, including one of his wives and two of his children. He died in prison in 1981.

Yet the victims of the massacre in Sonora had nothing to do with Ervil’s church. There is an important distinction to be drawn, explains Dr Rosetti, between surnames and religious affiliation.
“Independent Mormons have been marrying LeBaróns, and vice versa, for generations,” she clarified on Twitter. “There are three distinct Churches that fall under ‘LeBarónism’.”
The majority of Mormons living in Mexico are members of the Church of Jesus Christ of Latter-day Saints (LDS), but those in La Mora are mostly independent, Dr Rosetti said.
In recent years, they have lived a broadly peaceful existence, free from US or Mexican government interference.
Their adherence to polygamy has slowly been phased out, although some still practice it. Most have dual citizenship and travel back and forth to the US freely and frequently.
“When you say Mormon, it is a very big umbrella term that covers lots of families,” says Dr Cristina Rosetti. “The fundamentalists are a big umbrella, and so are the LeBaróns.”
Yet it seems violence has again been associated with the LeBarón name.
It isn’t easy to remain shielded from the drug war when you live in cartel-controlled regions of Mexico. The drug-related violence began to worsen in late 2005 and grew in intensity and ferocity during the military deployment ordered by former President Felipe Calderón.
His successor, Enrique Peña Nieto, oversaw the bloodiest term in office in modern memory as the cartels first expanded, then splintered and grew new tentacles.
In 2009, the Mormons in the northern states of Mexico were warned in the clearest possible terms that they inhabited “tierra sin ley”, a lawless land.

One of their number, Benjamin LeBarón – great-grandson of the group’s founder, Alma – had spoken out about organized crime. He criticized the extortion and intimidation being exerted on local farmers and created a group called SOS Chihuahua urging towns to denounce the abuses to the authorities.
In July of that year, Benjamin was dragged from his family home by gunmen, along with his brother-in-law, Luis Widmar, who had tried to intervene. The next day, their bodies were found on the outskirts of town, brutally beaten and bearing signs of torture.
The drug cartel’s message to the LeBarón family was clear: don’t meddle with us; don’t meddle with our business interests or the smooth operation of our drug routes north.
Don’t talk to the police or draw attention to things that are happening in these states. To defy such a warning will cost you your life.
It is a little over 10 years since armed men killed Benjamin LeBarón. During that decade, his relatives seem to have established a sort of uneasy peace with the local cartel in Sonora, a group called Los Salazar, which is a faction of the powerful Sinaloa Cartel of the jailed drug lord El Chapo Guzmán.
“It’s not like they can uproot an entire community,” says Anna LeBarón, Ervil’s daughter, who wrote a book about life in her father’s sect called The Polygamist’s Daughter.
Anna says she has seen the calls for the Mormons to return to the US but points out that “it isn’t that simple.” The Mormon community pre-dates the drug cartels in Sonora and, even though they now live side by side with some very violent people, it isn’t realistic to expect them to simply leave. They are “very integrated” into the local area, she says.
“These kinds of events give people reason to consider their options. But it’s an entire community. It’s their livelihood.”
In fact, following the massacre, some Mormons have described how the drug gangs are simply an accepted part of daily life in Sonora. They would nod as they passed by cartel gunmen, might know their names, would stop at their checkpoints and show them they were only transporting agricultural produce in their pick-up trucks.
Almost from the moment the news broke of the attacks, the Mexican government has claimed that the killings were a case of mistaken identity. An armed group called La Línea supposedly carried out the ambush and confused the SUVs of women and children with a convoy of Los Salazar, their Sonora-based rivals, the authorities say.
Were the murders of the Mormon women and children simply an accident in the wider story of La Línea versus Los Salazar? Certainly some representatives of the LeBarón family don’t think so. They believe their loved ones were deliberately targeted:
“The question of whether there was confusion and crossfire is completely false,” said Julian LeBarón from inside the Mormon settlement of La Mora, shortly before the funerals of his slain relatives. “These criminals who have no shame opened fire on women and children with premeditation and with unimaginable brutality. I don’t know what kind of animals these people are.”
Recently the family had become more vocal again in their opposition to the cartels, especially in calling for action on the illegal traffic in assault weapons and high-velocity arms from the United States. Whether their activism was enough to provoke such cold-blooded slaughter of their children is hard to say.
Meanwhile, the Mexican government argues the gunmen allowed some of the children to escape – evidence, they say, of the cartel’s realization that a mistake had been made. The Mexican government would doubtless prefer this to be the case.
As the victims were US citizens, the murders have an international dimension which has increased the pressure on President Andrés Manuel López Obrador.
Until now, he has tried to avoid becoming embroiled in an ever-escalating war with the drug cartels. “Hugs, not guns”, he famously said on the campaign trail.
In the end, perhaps the question of whether the attack on the Mormon community was an error or deliberate isn’t the point. Much like the murder of Benjamin LeBarón 10 years ago, the perpetrators’ desired effect was to sow fear and to terrorize people in the region.
So what can be done?
Again, let’s turn to our history.
The Mexican Revolution, which began in 1910, ended dictatorship in Mexico and established a constitutional republic.
A number of groups, led by revolutionaries including Francisco Madero, Pancho Villa and Emiliano Zapata, participated in the long and costly conflict.
Pancho Villa, the Mexican revolutionary leader who controlled much of northern Mexico during 1914 and 1915, experienced military setbacks after breaking with the Carranza government and being subjected to a U.S. arms embargo.
Villa turned against the new president, claiming with some justification that Carranza was not making good on his reform pledges.
Villa himself was a rascal, an enormous self-promoter and an occasional champion of the underprivileged. Villa was initially engaged in a struggle on behalf of the government against rival forces.
He became the darling of Hollywood filmmakers and U.S. newspapermen by granting open access to his campaigns. Some claimed that he actually staged battles for the cameras and publicity.
Villa’s horizons broadened considerably when he began to seek control of the Mexican government for himself. His method was to weaken Carranza by provoking problems with the United States.
On January 10, 1916, his forces attacked a group of American mining engineers at Santa Ysabel, killing 18. The Americans had been invited into the area by Carranza for the purpose of reviving a number of abandoned mines.
Pancho Villa’s men struck next on March 9, by crossing the border to attack Columbus, New Mexico, the home of a small garrison. The town was burned and 17 Americans were killed in the raid.
War fever now broke out across the United States. Senator Henry F. Ashurst of Arizona suggested that “more grape shot and less grape juice” was needed, a none-too-subtle criticism of the teetotaling former Secretary of State William Jennings Bryan.
The Wilson Administration supported Carranza as the legitimate Mexican head of state and hoped that U.S. support could end Mexican political instability during the revolutionary period. So basically, while several revolutionary Mexican leaders were pushing for control of the country, the US stepped in and supported the guy we wanted (Carranza).
The installation of the Venustiano Carranza regime in Mexico City did not result in lasting tranquility with the United States. Events became so chaotic that the State Department issued a warning to U.S. citizens living in Mexico to leave the country. Thousands took the advice.
Prior to the Mexican Revolution, the U.S.-Mexico border had been only lightly policed. The instability of the revolution led to an increased U.S. military presence, while U.S. citizens along the border often sympathized or aided the various factions in Mexico.
In response, the Wilson Administration decided to order a punitive raid into Mexico with the goal of capturing Pancho Villa.
Because of earlier, smaller raids, Wilson had already considered ordering an expedition across the border, and so he directed Newton Baker, the Secretary of War, to organize an expedition specifically to pursue Villa.
Wilson also attempted to appease Mexican President Venustiano Carranza by claiming that the raid was conducted “with scrupulous regard for the sovereignty of Mexico.” Nevertheless, Carranza regarded Wilson’s actions as a violation of Mexican sovereignty and refused to aid the U.S. expedition.
The task of capturing Villa was given to U.S. Army General John J. Pershing (whose aide-de-camp was Lieutenant George S. Patton).
Pershing’s forces entered Mexico, but failed to capture Villa. Instead, they encountered significant local hostility, and engaged in a skirmish with Carranza’s forces who felt the US was trying to take control of Northern Mexico.
In the face of mounting U.S. public pressure for war with Mexico, Wilson and Secretary of State Robert Lansing hoped to improve relations with Carranza, and that the issue of border raids could be solved by negotiations with the Carranza government.
Wilson selected U.S. Army Chief of Staff Hugh L. Scott to negotiate with the Mexican government representative Alvaro Obregon. Scott and Obregon entered into negotiations in Juarez and El Paso, but failed to produce an agreement on anything more concrete than further talks.
Meanwhile, on May 6, another cross-border raid by Pancho Villa’s guerrillas occurred at Glenn Springs, Texas, causing more U.S. troops to enter Mexico to pursue the raiders.
Tensions flared again when U.S. troops pursuing Villa instead clashed with Carranza’s forces at the Battle of Carrizal on June 21, resulting in the capture of 23 U.S. soldiers.
From March 16, 1916, to February 14, 1917, an expeditionary force of more than fourteen thousand regular army troops under the command of “Black Jack” Pershing operated in northern Mexico “in pursuit of Villa with the single objective of capturing him and putting a stop to his forays.”
Another 140,000 regular army and National Guard troops patrolled the vast border between Mexico and the United States to discourage further raids.
By April 8, 1916, General Pershing was more than four hundred miles into Mexico with a troop strength of 6,675 men. The expedition set up its headquarters in the town of Colonia Dublan and its supply base on a tract of land near the Casas Grandes River.
Demonstrators in Mexico marched in opposition to the U.S. expedition. Aware of Wilson’s anger over the recent battle, Carranza wrote to Wilson on July 4, suggesting direct negotiations.
Wilson and Carranza agreed to the establishment of a Joint High Commission, which met at New London, Connecticut on September 6.
The Commission issued a statement on December 24, 1916 which stated that U.S. troops could remain in Mexico if their presence was necessary, but otherwise should withdraw.
Despite several close calls, Villa always managed to escape the larger and better-equipped invaders. An exasperated Pershing cabled Washington: “Villa is everywhere, but Villa is nowhere.”

The chase lasted nine months and finally ended in February 1917, when Wilson summoned the soldiers home in anticipation of imminent hostilities with Germany as the United States prepared to enter World War One.
So there you have it folks.
Our history once again raises a lot of questions.
I often wonder what would have happened had the US not entered WWI and instead decided to keep our troops 400 miles inside Mexico.
A 400 mile buffer zone would sure end a lot of our problems on our southern border.
What about Trump offering to send troops into Mexico? Obviously it has been done in the past, by, of all people, a liberal Democrat, President Woodrow Wilson!
Finally, what about declaring the drug cartels terrorist organizations? In 1915, Pancho Villa was ruling northern Mexico just like a modern drug lord, causing harm not only to US citizens but to the Mexican people as well.
I am sure you have thoughts on this as well.