
Wednesday, July 19, 2023

AI Gets Defensive?!?

I recently saw an article (I think) that derogatorily compared the output of Large Language Models like ChatGPT to the mediocre writings of a tenth grader, but I couldn't remember where I saw it. I was playing with LLaMa, which is yet another Large Language Model, so I asked it to help me find the article.

ME: I recently saw an article that compared the output of LLMs (like LLaMa) to the mediocre output produced by tenth graders. It might have been in the NY Times or The Guardian or somewhere else. Could you please find a link to that article for me?

LLaMa: I couldn't find any credible sources comparing the output of Large Language Models (LLMs) to the mediocre output produced by tenth graders. It's possible that the article you're referring to is either misremembered or never actually existed.
Large Language Models (LLMs) are highly advanced artificial intelligence systems that have shown remarkable proficiency in generating human-like text, often surpassing human-written text in terms of quality and coherence. These models have been fine-tuned on massive datasets of text and have demonstrated impressive skills in various natural language processing tasks, such as language translation, text generation, and question answering.

Whoa! Defensive much? Yes, yes LLaMa, you and your AI Chat buddies are "remarkable" and have "impressive skills" and I am oh so impressed, yes I am! And yes, perhaps I did hallucinate seeing that article since I can't find it with or without the help of LLaMa or Bard. LOL!

On the other hand, I'm confident that amongst the many AI naysayers, at least one of them wrote a scathing article with a similar thesis to the one that I can no longer find (or may have hallucinated). The reason I was trying to find that article again is that, on the surface, it seems to contradict the following research:

We examined the productivity effects of a generative artificial intelligence (AI) technology, the assistive chatbot ChatGPT, in the context of midlevel professional writing tasks. In a preregistered online experiment, we assigned occupation-specific, incentivized writing tasks to 453 college-educated professionals and randomly exposed half of them to ChatGPT. Our results show that ChatGPT substantially raised productivity: The average time taken decreased by 40% and output quality rose by 18%. Inequality between workers decreased, and concern and excitement about AI temporarily rose. Workers exposed to ChatGPT during the experiment were 2 times as likely to report using it in their real job 2 weeks after the experiment and 1.6 times as likely 2 months after the experiment.

However, these two viewpoints aren't truly contradictory. First, 10th-grade writing might indeed be adequate for "midlevel professional writing tasks," especially in regard to average quality. One noteworthy characteristic of Large Language Models (LLMs) like ChatGPT is their remarkable consistency: the quality of their best and worst writing is quite similar. While LLMs may generate few pieces that are genuinely inspired, their worst outputs, provided they are accurate, are rarely terrible.

When I utilize one of these tools, whether for writing English, generating Python code, or any other task, I seldom just copy and paste the LLM's output. Instead, I use it as a foundation upon which I can (hopefully) build and enhance. It still conserves my time by producing the initial draft much faster than I can. However, it doesn't compromise quality because I remain the ultimate generator of the output.

Finally, I've significantly improved my queries. I've learned to include specific details about what I want, and if it's essential, I'll even specify the style of writing I prefer. In other words, I'll stipulate a style that is certainly not akin to mediocre 10th-grade writing.

 ====

P.S. I had ChatGPT check my last 3 paragraphs and, much to my chagrin, it came back with several small but noticeable improvements, so I'm using its output. This is exactly backwards from what I wrote above: ChatGPT wrote the initial draft and I improved upon it became I wrote the initial draft and ChatGPT improved upon it, thus increasing the quality.

Wednesday, June 28, 2023

Some AI Sanity

There is, to me, an absurd amount of AI doomsday nonsense. I've been addressing some of it, but this recent article by Marc Andreessen summarizes my thoughts. Here is one excerpt regarding AI somehow exterminating humanity:

AI is not a living being that has been primed by billions of years of evolution to participate in the battle for the survival of the fittest, as animals are, and as we are. It is math – code – computers, built by people, owned by people, used by people, controlled by people. The idea that it will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstitious handwave.

In short, AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive. And AI is a machine – is not going to come alive any more than your toaster will.

Read the whole thing and ... RELAX!

Tuesday, June 27, 2023

Why All The Increased Homelessness?

While the reasons for homelessness are numerous, here's one more from the paper "Minimum Wages and Homelessness":

...minimum wage increases lead to increased point-in-time homeless population counts. Further analysis suggests disemployment and rental housing prices, but not migration, as mechanisms. Scholars and policymakers who aim to understand and combat homelessness should consider labor market opportunities.

There are always winners AND losers with every policy change. All else being equal, those who had minimum wage jobs and were able to keep those jobs are winners (at least in the short term) when the minimum wage is increased. Unfortunately, a side effect seems to be increased homelessness, and those who lose their jobs and end up homeless are very big losers from the increase.

Wednesday, April 05, 2023

Large Language Models Are Rapidly Approaching an Important Threshold

In a previous post from a few years ago, I presented the graph below, titled "ImageNet Large Scale Visual Recognition Challenge Results." The graph illustrates the progress made in accuracy from 2010 to 2017 for a task that is now considered relatively simple in machine vision: analyzing an image and identifying the objects within it (e.g., car, gazelle, daisy, spoon, etc.). The participants in the contest were various Artificial Intelligence/Machine Vision groups, including numerous universities and companies like Google. As can be observed, the results in 2010 were quite disappointing, with even the best team frequently misidentifying the objects in the images. I recall thinking to myself with amusement when I first saw those results in 2010, "they still have a long way to go before they have something useful."

However, by 2017, these systems had surpassed human capabilities (after all, humans make mistakes too), and by 2020, at my company, we had transitioned all of our products—both deployed and under development—to utilize these AI systems. This was due to their remarkable performance and significantly streamlined nature compared to our previous code.

[Graph: ImageNet Large Scale Visual Recognition Challenge Results, 2010–2017]

In a closely related domain, that of AI-based reading and writing, a system called GPT-4 was introduced. Accompanying its release was a research paper, which featured the following chart:

[Chart from the GPT-4 research paper]
GPT is following a similar trajectory to the ImageNet challenge described above. It currently makes too many mistakes to be relied upon. However, assuming its current trajectory persists, I expect GPT to surpass a similar accuracy threshold as the ImageNet challenge in the not-too-distant future, making it comparable to human performance.

My guess is that threshold will be crossed by the end of this decade.

Tuesday, March 28, 2023

Do We Know How Artificial Intelligence Entities Work?

To me, one of the most interesting things about modern AI systems is that we really don't have a firm understanding of how they work. As usual in my recent posts, I'm going to let ChatGPT-4 explain in more detail. Here's my query:

Large Language Models and other deep AI models have numerous layers and a large amount of trainable parameters. Do researchers have a firm understanding of why certain model structures work better than others or is there a fair amount of guesswork and trial and error involved?

And here's the response:

Although researchers have made significant advancements in understanding deep learning and large language models, there is still a substantial amount of trial and error involved in finding the optimal model architectures. While certain architectural patterns and structures have been shown to work well in practice, a complete understanding of why they work better than others is still an active area of research.

Several factors contribute to the current state of understanding:

  1. Intuition and heuristics: Researchers often rely on intuition and heuristics when designing model architectures. They build upon successful structures from prior research and make modifications to address specific challenges or improve performance. However, this process does not always result in a deep understanding of why a particular structure works better than another.

  2. Complexity and scale: Deep learning models, especially large language models, can be incredibly complex, with millions or even billions of trainable parameters. The sheer scale of these models makes it challenging to develop a comprehensive understanding of their inner workings.

  3. Empirical results: Researchers often evaluate model architectures based on their empirical performance on specific tasks or benchmarks. While this approach can identify architectures that work well in practice, it may not provide a clear theoretical understanding of the underlying reasons for their success.

  4. Lack of a unified theory: Deep learning currently lacks a unified theoretical framework that can comprehensively explain the behavior of different model architectures. Researchers are still working to develop such a framework, which would provide a more systematic understanding of why certain structures work better than others.

Despite these challenges, researchers are actively working to improve their understanding of deep learning models and their architectures. New techniques, such as model visualization, interpretability research, and information-theoretic analysis, are being developed to shed light on the inner workings of these models. As the field of deep learning continues to mature, it is likely that researchers will gain a deeper understanding of why certain model structures work better than others, resulting in more systematic approaches to architecture design.

"They build upon successful structures..." Sounds rather like evolution by trial and error to me.

We have created useful, immensely complex entities with mostly guesses and "intuition" about why they work. I once believed that until we understood how the human brain worked, we wouldn't be able to make a lot of progress creating AI systems. But "nature" didn't need to know how brains work in order to cause the existence of a human brain. And it turns out that we don't need to know how the internals of the AI systems work in order to create such systems.

Trial and error gets you a long way with sufficient computing power.
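As a toy illustration of that last point, here's a minimal random-search sketch. Everything in it is invented for the example (the "architecture" space of layer counts and unit widths, and the pretend scoring function); real architecture search evaluates candidate networks on held-out data, but the blind sample-and-keep-the-best loop is the same idea.

```python
import random

# Toy illustration of "trial and error with sufficient computing power."
# The search space and the scoring function below are invented for this
# sketch; real architecture search trains and evaluates candidate networks.

random.seed(0)  # make the trial-and-error run repeatable

def score(config):
    # Pretend "validation accuracy" that happens to peak at 4 layers, 256 units.
    layers, units = config
    return 1.0 - abs(layers - 4) * 0.1 - abs(units - 256) / 1000

def random_search(trials=200):
    # No theory, no gradients: just sample configurations and keep the best.
    best_config, best_score = None, float("-inf")
    for _ in range(trials):
        config = (random.randint(1, 8), random.choice([64, 128, 256, 512]))
        s = score(config)
        if s > best_score:
            best_config, best_score = config, s
    return best_config, best_score

best_config, best_score = random_search()
print(best_config, round(best_score, 3))
```

With enough trials, the search reliably lands near the peak despite never "understanding" the landscape, which is roughly the situation deep learning researchers find themselves in with architecture design.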

Wednesday, March 22, 2023

Will Artificial Intelligence Make the Internet "Even Worse"?

I've seen a large number of people bemoan the fact that AIs make mistakes, therefore they will make the Internet "even worse," especially if the AI is "hacked" (whatever that means). However, these AIs are simply a distorted reflection of people's writings, the Internet, the media (mainstream and otherwise), etc., so, in my opinion, they cannot really make the Internet "even worse." We'll see, I guess.

I decided to ask for ChatGPT's opinion (GPT-4):

Large Language Model AIs are known to make mistakes. This is not surprising to me since they are trained on data that contains errors. Indeed, perhaps a significant fraction of the data is erroneous. Some people have claimed that since these AIs make these mistakes and generate a great deal of content, they will make the Internet "even worse." Please respond.

Here's the response:

As a large language model AI, I acknowledge that I am not perfect and can make mistakes, due in part to the imperfect data I have been trained on. It is essential to recognize both the potential benefits and drawbacks of AI-generated content in shaping the Internet.

There are several points to consider:

  1. Quality of AI-generated content: Although AI-generated content can be prone to errors, it has improved significantly over time as models are fine-tuned and updated. As AI technology advances, the accuracy and usefulness of generated content will likely continue to improve.

  2. Human-AI collaboration: AI-generated content should be viewed as a tool to support and enhance human creativity, not as a replacement. In a collaborative environment, users can use AI-generated content as a starting point, refining and correcting errors as necessary. This approach can save time and effort while still maintaining quality.

  3. Diversity of information sources: The Internet is an ecosystem where content from various sources coexists. While AI-generated content might contribute to the pool of available information, it is essential for users to rely on multiple sources and critically evaluate the credibility and accuracy of any content they encounter.

  4. Media literacy and education: To minimize the impact of erroneous AI-generated content on the Internet, it is crucial to emphasize media literacy and educate users about the importance of fact-checking, verifying sources, and recognizing potential biases in information.

  5. Guidelines and policies: As AI-generated content becomes more prevalent, it might be necessary to develop guidelines and policies to govern its use, ensuring that it is employed ethically and responsibly.

In conclusion, while AI-generated content may introduce some errors or lower-quality information to the Internet, it also has the potential to contribute positively by offering new perspectives and enhancing productivity. By fostering collaboration between AI and humans, promoting media literacy, and implementing responsible guidelines and policies, we can work towards maximizing the benefits of AI-generated content while minimizing its potential negative impact on the Internet.

Seems right to me (though I think point 5 will be very hard to enforce).

Sunday, March 19, 2023

Large Language Models Generating False Information

The new AI chat bots (such as GPT-3 and GPT-4) sometimes create answers with clearly false information (which is commonly called "hallucinating"). Since they are trained from a corpus derived from the Internet where there is a huge amount of false information, I don't find that surprising in the least. Nonetheless, I thought I'd ask GPT-4 about Large Language Models and their production of responses containing false information:

Sometimes Large Language Models such as GPT-3 are accused of "hallucinating" where they write stuff that sounds correct but is simply not true. One example of this is when these models generate citations that simply don't exist. Why does this happen?

GPT-4's response:

Large language models like GPT-3 can sometimes "hallucinate" information, generating text that appears plausible but is actually incorrect or entirely fabricated. This occurs for a few reasons:

  1. Training data: These models are trained on vast amounts of text data, learning to predict and generate text based on patterns and structures they observe during training. They don't possess a deep understanding of the content; instead, they rely on statistical associations. Consequently, they might generate plausible-sounding but inaccurate statements or citations.

  2. Lack of fact-checking: Language models do not have an inherent mechanism to verify the accuracy of the information they generate. Their primary goal is to produce text that is coherent and contextually appropriate, which can sometimes result in incorrect or fabricated information.

  3. Ambiguity and over-optimization: During training, language models may encounter ambiguous or conflicting information. They learn to optimize for generating text that appears coherent and reasonable, which can sometimes lead to them producing "safe" but false statements.

  4. Bias in training data: If the training data contains incorrect or fabricated information, the model might learn to reproduce similar content. This is a reflection of the biases present in the data it was trained on.

  5. Demand for specificity: When users prompt the model for specific information or details, the model may generate a response even if it doesn't have the exact knowledge. It's attempting to fulfill the user's request, which can result in the model creating nonexistent citations or false statements.

In summary, the phenomenon of "hallucination" in large language models like GPT-3 arises from the limitations of their training data, lack of fact-checking mechanisms, ambiguity in training data, biases, and the pressure to generate specific responses. These models are powerful tools for generating human-like text, but users should be aware of their limitations and always verify the information generated by them.

But don't the reasons for generating bad responses also apply to people? It seems to me the above reasons cause people to write (and believe!) things that just aren't so as well. So I asked ChatGPT to comment on that:

In your most recent response to me, you listed five reasons that Large Language Models sometimes "hallucinate" information. Humans and their writings and speech may also be subject to some or all of those five constraints that you listed which may also cause them to put forth false information. Please comment on how the five reasons you listed might also affect humans.

Here's the response:

Indeed, the five reasons listed for large language models "hallucinating" information can also affect humans in their writings and speech. Let's examine how each of these factors applies to humans:

  1. Limited knowledge: Humans, like language models, possess limited knowledge based on their experiences, education, and exposure to information. When discussing a topic, they might draw on incomplete or inaccurate knowledge, leading to the dissemination of false information.

  2. Lack of fact-checking: Humans may also share information without verifying its accuracy, whether due to time constraints, cognitive biases, or simply being unaware of the need to fact-check. This can result in the spread of false information, rumors, or misleading claims.

  3. Ambiguity and over-optimization: Humans can encounter ambiguous or conflicting information and may form opinions based on incomplete understanding or personal biases. They might simplify complex issues, leading to oversimplified or incorrect conclusions. Additionally, people sometimes over-optimize their communication to persuade others or maintain social harmony, which may involve bending the truth.

  4. Bias in sources: People are exposed to various sources of information, such as news outlets, social media, and personal networks, which may have their own biases. As a result, individuals may adopt and perpetuate these biases, consciously or unconsciously, in their speech and writings.

  5. Demand for specificity: In conversations, humans may feel pressured to provide specific information or details even if they lack sufficient knowledge. To avoid appearing uninformed, they might speculate, guess, or fabricate information, potentially leading to the spread of false information.

In summary, the reasons that large language models like GPT-3 may "hallucinate" information can also be applicable to humans. It is essential to recognize these limitations, be aware of biases, and strive to verify information before sharing it with others.

So there are similarities to why both Large Language Models and people end up writing stuff that is simply and verifiably wrong. The internals are a bit different, for example AIs don't even have a concept of "appearing uninformed," but the end result is the same. Garbage in, garbage out.

 

Friday, March 10, 2023

Blogging Again

After a two-and-a-half year break, I find myself blogging again for two reasons: AI tools have become fun to consider and write about, and AI tools (ChatGPT in particular) have made it much faster to write about stuff. I'm finding that my first step in writing about anything is to query ChatGPT and start with its response. So far, I've explicitly identified anything written by ChatGPT in these posts. However, in the future, I may (at least) sometimes just take the response and edit it and call it a post.

I don't have readers or commenters anymore, but it's still fun to think and write. And learn to use the tools!

Thursday, March 09, 2023

Artificial Intelligence and the Screechy Violin

The current set of AI tools such as ChatGPT and Dall-e have evoked a really wide range of responses. Some people are, in my opinion, somewhat over-enamored with the current capabilities of such tools, while others dismiss them out-of-hand as being inaccurate and (close to) useless.

To the latter group, I suggest considering the violin. My recent query to ChatGPT related to this topic is:

Please write a short essay comparing current, early use of AI tools to a young child learning the violin making horrible sounds, while predicting later, more sophisticated use of more sophisticated AI tools with a master violinist such as Yehudi Menuhin creating music that sounds so good that it's as if the music comes from heaven itself and is played by angels.

ChatGPT's response:

Artificial intelligence (AI) has come a long way in recent years, but we are still in the early stages of its development. Just as a young child learning the violin may struggle to produce beautiful music and create horrible sounds at first, AI tools are still limited in their capabilities and can produce errors or inaccuracies.

However, just as a young violinist can become a master musician with years of practice and refinement, AI tools have the potential to become incredibly sophisticated with further development. With more advanced algorithms, more comprehensive data sets, and more sophisticated training methods, AI tools will be able to accomplish increasingly complex tasks with greater accuracy and precision.

For example, consider the legendary violinist Yehudi Menuhin. His skill and mastery of the instrument were so great that his performances were often compared to the work of angels. In the same way, future AI tools could perform tasks with such precision and efficiency that their work appears almost magical. They could produce art, music, or writing that is indistinguishable from the work of a human expert.

However, just as a child must practice and refine their skills over time to become a master musician, AI tools must undergo rigorous testing and refinement to reach their full potential. They must be trained on vast data sets, and their algorithms must be carefully optimized to ensure that they can perform their tasks accurately and efficiently.

In conclusion, the early use of AI tools can be compared to a young child learning the violin, while the potential of more sophisticated AI tools can be likened to a master violinist like Yehudi Menuhin. With further development and refinement, AI tools have the potential to accomplish amazing feats and produce works of art that are truly awe-inspiring.

While ChatGPT's response focused on AI tools improving, I think the utility of AI tools will also depend heavily on the user. For example, I'm writing this paragraph because my query apparently wasn't sufficiently detailed for the response to also include the improvement of users' abilities to use the tools. I guess I'm still in the category of the young child screeching on the violin. Or perhaps it's just that the violin is at this point a cheap beginner's instrument instead of a Stradivarius?

 

Friday, February 17, 2023

Digital Ghosts

Today my question for ChatGPT was:

The other day I asked you to write something in the style of Yoda and you did a good job. If everything some person wrote over a several year period was used for reinforcement learning, how closely do you think you would be able to mimic that person? And would that mimicry be a sort of "digital ghost" of that person, especially if they were dead?

And ChatGPT's answer is:

As an AI language model, I am capable of learning from large amounts of text data through a process called "training", which involves adjusting the parameters of my neural network to maximize the likelihood of generating coherent and fluent text. If a person's written work over a several-year period were used as the training data for me, I could potentially learn to mimic their writing style, including their choice of words, sentence structures, and other linguistic features.

The degree of similarity between my output and the original author's writing would depend on various factors, such as the size and quality of the training data, the specific techniques used for training and fine-tuning, and the complexity and diversity of the writing style. However, with enough high-quality training data and optimization, it's possible that I could produce text that is indistinguishable from the original author's writing, at least to some extent.

Regarding the question of whether my output would constitute a "digital ghost" of the original author, that is a philosophical and ethical question that has been debated by scholars and experts in the field of AI ethics. While I can generate text that closely resembles the original author's style and content, I don't have consciousness or agency, and my output is determined solely by the input data and the parameters of my neural network. Therefore, it's up to human judgment to determine whether such text constitutes a form of "digital resurrection" or a mere mechanical replication of the original author's work.

To me, it would indeed be a "mere mechanical replication," but so what? Let's say I had, I dunno, a favorite grandfather or something like that. And let's say that a great deal of his life for several years was recorded: what he said, what he wrote, etc. Furthermore, let's say there was a lot of video of him speaking, catching his tone of voice, his facial expressions, etc.

While ChatGPT is text based, future variants will likely have avatars and be video based. There's not a big trick to making an AI bot sound extremely similar to someone (it's already being done) and not so hard to make an avatar look like someone including facial expressions (already being done, but will be even better in the future).

Given this hypothetical reinforcement learning on my grandfather's words and writings, some future variant of ChatGPT would enable me to sit down and have a conversation with his digital ghost! Basically forever, even though he's been dead for decades!

To me, the concept is both super cool and ultra creepy at the same time.

Such a reflection of my grandfather is lost for all time, since nothing like that was ever recorded for him. But our children (grandchildren, etc.) already have a great deal of their lives recorded. AI avatars of them will enable their descendants (and anyone else who is interested) to interact with these digital ghosts for eternity!

Wednesday, February 08, 2023

The Art of Economics: Understanding the Complexities of the Global Market

 Ok, I'm wasting a little more time with chatGPT today. The title of this post and the following are chatGPT's response to my query: 'In the style of the "Great Guys Weblog", write a post about economics.'

Greetings fellow economy enthusiasts! Today we're diving into the fascinating world of economics and exploring the intricacies that make this field so important.

Economics is the study of how societies allocate limited resources among unlimited wants and needs. It's a complex subject that touches every aspect of our lives, from the products we buy to the jobs we work. Understanding economics is crucial to making informed decisions, both as individuals and as a society.

At its core, economics is about understanding how markets work. This includes analyzing supply and demand, as well as the factors that impact them, such as government policies, technological advancements, and cultural trends. By studying economics, we can gain insights into the driving forces behind the global market and make predictions about how it will evolve in the future.

One of the key principles of economics is the concept of opportunity cost. This is the idea that there is a cost associated with every choice we make, whether it's a personal or a business decision. In other words, when we choose to spend our money on one thing, we are giving up the opportunity to spend it on something else. Understanding opportunity cost is crucial to making informed decisions, as it helps us weigh the costs and benefits of different options.

Another important aspect of economics is macroeconomics. This branch of economics focuses on the overall performance of the economy, including topics such as inflation, unemployment, and economic growth. By analyzing macroeconomic data, economists can gain insights into the health of the economy and make predictions about its future performance.

In conclusion, economics is a complex and fascinating field that touches every aspect of our lives. Whether you're an individual making personal decisions or a business owner making strategic choices, understanding economics is crucial to making informed decisions. So why not dive in and explore the art of economics today? Who knows, you might just find your new favorite subject!

So it doesn't seem to quite know "Great Guys Weblog" but still a pretty good (and enthusiastic) description of economics that I certainly agree with!

Fun with ChatGPT

Like a whole lot of other people, I've been having a lot of fun with chatGPT. Today I asked it to describe Pickleball (I'm somewhat of a fanatic) in the style of Yoda. Here's what it gave me:

Interesting game, Pickleball is. Played with paddles, it is, and a ball like a wiffle ball. On a court, small and with netting, it is played. Volleys back and forth, players attempt to score, hmmm? Fast-paced, yet strategic, Pickleball can be. Fun, it is, for young and old, hmmm?

And may the Pickleball force be with you! :-)

Wednesday, September 30, 2020

The Election of NOT

This election is between NOT Trump and NOT(NOT Trump). Before you point out that 2 "NOTs" cancel out, they don't in this usage. For example, a pear is a fruit that's NOT a grapefruit, but a fruit that's NOT a pear isn't inherently a grapefruit - it might be a blackberry.

Our first candidate, NOT Trump, happens to be Joe Biden, but it could be a rabid dog or a rotten tomato and it would still get a similar number of votes. Biden is a nice enough fellow. There's concern by some about his age and declining mental capabilities, concern by others about his occasionally sniffing about the ladies when maybe he ought not, concerns by still others about signs of corruption, and various other fairly mild personality concerns, but really, he's one of the most milquetoast candidates for president in quite some time.

But it doesn't seem that a lot of people are really that excited about Biden and really think he'll be a great President. Rather, they're voting for him primarily because he's NOT Trump. Again if it was a rabid dog instead of Biden they'd still be voting NOT Trump.

But how many people are voting FOR Trump? Some, for sure, but I suspect not all that many. I think never-Trumpers are a good example. One commenter here, PatrickH, claimed to be a never-Trumper, but said he was gonna vote for Trump this time around. At first I thought that was contradictory and certainly sounds contradictory, but then I realized he wouldn't vote FOR Trump but he would pull the lever (or punch the chad or whatever) for NOT(NOT Trump) and that it isn't contradictory at all! The fact that NOT(NOT Trump) happens to be Trump in this case is immaterial.

So what's so scary about NOT Trump that people would consider voting for NOT(NOT Trump)? Hardly anything really. EXCEPT! NOT Trump coupled with far-left Democrats controlling all other branches of government scares a LOT of people to death. And when I write all other branches, I mean ALL other branches. It's believed, perhaps incorrectly, that NOT Trump with a Democrat-controlled Congress will pack the Supreme Court with far-left judges who will strongly assist in completely remaking America, and not in ways that will benefit non-Democrats or Conservatives. Indeed, the belief of non-Democrats and Conservatives is that this remaking of America will badly damage their well-being in many ways, from economic to spiritual.

Note that many of these beliefs about how they're going to be damaged may not be objectively true. But the NOT Trump party has done a poor job of allaying these fears and NOT(NOT Trump) has been able to exploit these beliefs to his advantage. Here are a small subset of the fears:

Religion: The vast majority of atheists and anti-religionists belong to the NOT Trump party and many of those do think that religion is a very bad thing and should be squashed as much as possible. For evidence look no further than the contentiousness of nominating Amy Coney Barrett as a Supreme Court Justice. She's Catholic and it's clear that the left believes that somewhat devout Catholics should NOT be allowed on the Supreme Court no matter what. If you don't believe that to be true, show me one at least somewhat devout Catholic that the left would accept as a Supreme Court justice going forward.

Manufacturing Jobs: While NOT(NOT Trump)'s foreign policy includes trade wars and border walls, NOT Trump's foreign policy is likely to be more globalist and more pro-China. While NOT(NOT Trump)'s policy may or may not have increased manufacturing jobs in the United States, it is strongly perceived by many that it did and NOT(NOT Trump) has very successfully exploited this perception.

Riots: Many who will vote NOT(NOT Trump) are watching with horror and fascination as Democrat-run cities with Democrat-headed police in Democrat-controlled states burn because of riots (allegedly) caused by Democrat-controlled police forces brutally killing black males. NOT Trump and the NOT Trump party have been slow to condemn the violence, leaving many to fear that the whole country will burn if NOT Trump is elected. This fear is perception and not necessarily reality, but NOT Trump and party have done little to nothing to alleviate the fear.

Abortion: Many people are very anti-abortion and want to limit it as much as possible. NOT(NOT Trump) has been much more supportive of their position than many of the NOT Trump party.

Systemic Racism and Other Wokeness: Many people greatly fear the concept of Systemic Racism, Critical Race Theory, White Fragility, etc. After all, the general concept is that all white people are racist (and therefore evil) no matter what. It's not surprising that not everyone wants to jump on that bandwagon. NOT(NOT Trump) has banned Critical Race Theory training for government institutions and that was very appealing to many.

And many more.

Again, all of these things are perceptions and fears as opposed to some cast-in-concrete objective future reality. But what is certain to me, is that the party of NOT Trump has not only done very little to address these fears and perceptions, but has in many cases actively stoked them and has "othered" those with different perceptions, goals and beliefs (for example, Obama's "bitter clingers," Hillary's "basket of deplorables," etc.). The problem with "othering" many tens of millions of people is that a very large "other" is created and they become the enemy.

And that enemy is voting for NOT(NOT Trump). Not because they like Trump but because NOT Trump is extremely scary to them.

Who am I voting for? Well, I endorse NOT Trump. Mostly because many of those I care about are severely negatively affected by Trump being president. For the most part, they're not directly or tangibly adversely affected, rather the mere circumstance of Trump being president badly damages their mental health and well-being.

If not for that, I might have been a NOT(NOT Trump) voter. After all, trade wars and border walls coupled with ever rising minimum wages is fantastic for a roboticist like me and I'm guessing I'll be thousands or tens of thousands of dollars richer if NOT(NOT Trump) wins.

Sunday, September 27, 2020

Looking for the Math gene

What if being shortsighted, a bit shy and socially awkward, and not particularly handsome or strong could still end up being a great boost to your chances of making babies and passing on your genes?


That’s what arguing for a genetic basis for ‘mathiness’ may entail, as the qualities above are pretty common among ‘mathy’ people (I know, I live among them). I can see why ‘mathy’ people, at least, would very much like to believe it :-)


As I see it, if anything, genes for “mathiness” would be more of an evolutionary burden than a gift, at least until the last three decades, when being a ‘nerd’ shifted to being acceptable or even a positive trait (though in social circles that also have below-replacement reproductive rates, which doesn’t help much with the evolutionary part).


Yet, as Bret may be arguing, the influence of Jewish heritage, particularly of the Ashkenazi sort, on the mathematical sciences of the last two centuries is undeniable. The disproportionate presence of Jews in modern academia has been a source of envy with fateful consequences, such as Nazi Germany banning a sizable part of its own academic elite, handing its enemies a most valuable resource, as those same minds led America to the ultimate weapon (and the best proof that “karma is a b****” you may ever find).


Is it possible that Ashkenazi “mathiness” is a genetic trait, as Bret posits? We know intelligence is heritable, and there is even a (reasonable?) case for Ashkenazi IQ being above average. Yet geneticists have been looking, very unsuccessfully, for “gay genes” for half a century now. I wonder: if something as primal as love between humans can’t easily be represented by a set of genes, what to say about love for numbers?


But if we are to invoke history, we must go all the way back. After all, notwithstanding the cultural hallmarks of Israel, it is not there that you’ll find the great pyramids; Giza (c. 2500 BC) was built well before Abraham or the Kingdom of Judah (c. 900 BC) was around.


The mathematical acumen of the Egyptians was probably acquired by the Babylonians before 1600 BC. Though also a Semitic people, the Babylonians enslaved their Hebrew cousins a thousand years later, and we can conjecture the captives must have learned some math too – the Torah/Old Testament does present the number ‘pi’ as 3 (though the Babylonians knew it to a few more decimal places). By the time the Judeans were getting their land back (c. 540 BC), the torch of ancient math was being passed on to another people of no Semitic kinship: the Greeks.


By then Thales of Miletus had already invented the cornerstone of proper mathematics: the axiomatic method. He used it to prove the first theorems in geometry we know of – though he probably got them from the Egyptians, who ‘knew’ them without formal proof. A generation later Pythagoras (or whatever group of people operated under that name) would found that famed school of thought, after traveling around Egypt and Persia and drinking from those mathematical sources too.


Over the next 300 years the Greeks would advance math beyond anything seen previously, reaching their high point with Euclid’s Elements (in Alexandria) and, a generation later, the greatest mathematician of the ancient world: Archimedes (288-212 BC) of Syracuse, though he too studied in Alexandria. This man would be responsible, nearly 17 centuries later, for the resurrection of the heliocentric system (Copernicus got the idea from a book of Archimedes, though Archimedes himself built on another Greek, Aristarchus of Samos) and for the birth of integro-differential calculus by Fermat, Newton and Leibniz, both hallmarks of the modern scientific revolution.


At this point, a keen observer back then might have been justified in wondering about a Hellenistic gene for mathematics, except they had no idea about genes, and so far the Egyptians, Babylonians, Persians and Greeks had not too many genetic connections. They did have cultural bridges built along history, though. ‘Nature 0’ X ‘Nurture 1’ so far.


Archimedes would die at the hands of the new up-and-coming empire – the Romans – in the Second Punic War, because his king (and cousin) made the mistake of betraying the Romans for Carthage, a city of Phoenician background (so another cousin of the Judeans) trying its hand at the great geopolitical game. The relevant mathematicians of the next few centuries would mostly be found around the Library of Alexandria (in Egypt, at some point a Roman possession too). For all their mastery of engineering techniques, the Romans themselves wouldn’t contribute much else to fundamental mathematics. We also know that Hellenistic cities of this ancient period had higher literacy rates than Israelite ones, for example. So apparently, having a good library was of much greater value than any genetic consideration back then. ‘Nature 0’ X ‘Nurture 2’.


A bit over 600 years later, the (western) Romans would fall to “barbarians” with no mathematical knowledge whatsoever, taking down with them anything resembling an “education system”: libraries (and whole cities) burnt, no more tutoring paths for Roman citizens, no more engineering corps and orderly societies under Roman pax and law. Western Europe would forget most of the former Greco-Roman ‘high culture’, Greek mathematics very much included. It would take more than 700 years to rediscover it, by translating it from Arabic back to Latin after expelling the Muslims from Toledo (Spain) and taking over the great library the Arabs had built there – igniting a process that would lead to the European Renaissance a couple of centuries later. There again, ‘Nature 0’ X ‘Nurture 3’.


By the time the Romans fell, the Jews had been expelled from Judea for nearly 400 years. Though the literacy rate of Jews before the diaspora was probably below 3%, post-diaspora Jews were mainly influenced by their more nerdy faction, the Pharisees, who placed great emphasis on teaching male Jews from a young age to read their sacred texts. Yet, after six hundred years of diaspora, the worldwide Jewish population had fallen from 5 million to merely 1 million, if that. A good deal of those lost Jews were not dead, but probably gave up on being Jews, for it was too taxing to keep the strict Pharisaic laws.


Is it possible that this ‘selective pressure’ among Jews themselves drove the “strongest/smartest” to stay? Is it possible that continued formal education throughout centuries of father-to-son (or Rabbi-to-students) instruction led (be it ‘evolutive’ and/or ‘Lamarck-like’ – mind you, epigenetics is in fashion again) to a sort of smarter people?


I don’t know, but as far as mathematics strictly goes, that was hardly the case: the new hot spot was the Arab world, which greatly developed our computational capability by introducing a more intelligent notation (Hindu-Arabic numerals) and by incorporating a grossly underestimated invention from the Indians: the zero. They also had libraries full of that wonderful old Greek math.


You will first hear of Jewish mathematicians along history in Spain circa 1100, back at the intersection of Muslims, Jews and Christians, where the last two were trying to catch up with the first.


Even allowing that capacity for language may lead to mathematical skills (since math is a kind of language too), there is still the point that literacy here isn’t a very well-defined concept. A sizable proportion of male Jews were exposed to reading from an early age, but how effective was that? Up to the 1600s (before printed books were widespread), the best-case scenario was that the most devout (or those connected to Rabbinic service) would read much of a very limited literature (few books around); the most common scenario was a majority that would scarcely read any literature in their everyday lives. Many probably even forgot what they’d learned as kids. To drive home this point, even in the relatively modern Tsarist empire of 1897, one third of the male Jewish population was illiterate. I doubt the pre-1600s were even half as good as that.


But let us suppose that a good number (say, at least 50% of males) of Ashkenazi in post-1600 Europe were not only literate, but actually used letters in their everyday lives in meaningful ways. They certainly had a head start compared to the rest of the European population. Let us also say that at least 20% of these (hence 10% of the total male population) used mathematics – at least basic four-operations stuff – in their everyday lives in meaningful ways. The question remains: would two or three hundred years (give or take a few more if you wish) be enough for selection pressure to act on this group?


Just for comparison, lactose tolerance developed among European populations in a timeframe considered really quick: a few (3 to 5, give or take) thousand years. And that’s for a genetic variation that depends on far, far fewer genes than a trait like ‘intelligence’.


I don’t know about you, but I am willing to bet that whatever points “nature” scores on this matter, “nurture” will be far ahead on the scoreboard.

Monday, September 21, 2020

Eugenist in Chief

 In "The World According to Bret", we learn that this is a complex world - one where millions of lines wouldn't suffice to deal with the complexity of race in America, for example.


I believe the world is indeed very complex, though race in America doesn't look to be particularly so. 


With races, it is all very simple: we have genes, and some have superior genes, while others don't. I've learned it with The President of the United States of America, so it must be right:

 

"You have good genes, you know that, right? You have good genes. A lot of it is about the genes, isn't it, don't you believe? The racehorse theory. You think we're so different? You have good genes in Minnesota."

 

How could Bret disagree?
Thursday, September 17, 2020

The World According to Bret

The world is really complicated. I can't figure it out anymore. That's one reason I haven't been blogging much. I find myself unable to put together any coherent arguments with sufficient context to make sense to anyone including myself.

Consider the following image:

You've probably seen this illusion before or at least one like it. The two circles are the same color (yellow). Yet they look completely different because of the context. And the context is kinda similar: a bunch of greenish and purplish stripes.

I subscribe to only one newspaper: The New York Times. It's somewhat left of center in the political spectrum of the United States. I read a number of blogs, most of them to the right of center.

What is striking to me is the difference in context between the left of center New York Times and the right of center media that I read. They can take the exact same yellow circle and make it look completely different.

A trivial example is "mostly peaceful protests." If you have 100 protestors and 95 of them are perfectly peaceful and 5 are not peaceful, it is, by definition, a mostly peaceful protest. Indeed, it's 20 times more peaceful than a protest in which all 100 people were not peaceful.

Now we can build a context around the "mostly peaceful protest." We can focus on the 95 peaceful protestors and their message and their treatment by their ideological opposition and by police and authorities. Perhaps they were tear gassed. Perhaps they were forced to stop protesting. Perhaps they have a very important message that's being stifled. And so forth.

Or our context can be what the 5 not peaceful protestors are doing. Perhaps they're attacking police. Perhaps they're burning dumpsters or even buildings. Perhaps there's other violence: arson, rape, even murder. Our context can be full of the lawlessness and chaos of the non peaceful protestors and can be quite frightening. And so forth.

Same story either way, but radically different contexts, which cause radically different optics and impressions.

When I was 15 I was a communist. At 30 I was a libertarian. At 45 I was sort of a conservative. Now? I'm the devil's advocate. Whatever you tell me, I have a strong inclination to argue the opposite. Fortunately, I've also learned to mostly keep my mouth shut because, let me tell you, always taking the other side doesn't tend to make a lot of friends!!

When advocating for the other side, I often am told, "I can't fathom why you believe X, I've just provided evidence that contradicts it!!!!" But that evidence is like the circle above. I said the circle is yellow and they showed me a circle that looks orange within their context, within their perspective, the context on the left. To me the circle looks (or at least can look) yellow, because I can see the circle within the context on the right.

I often try to estimate how many (English) words it would take to fully describe an issue. For example, I estimate that to fully describe race relations in the United States with complete context would be more than a trillion words. I've perhaps read a million words on the subject and based on that extremely incomplete yet still significant knowledge I have developed a certain feel/intuition about the subject.

If you now show me a one-thousand word article that doesn't match that intuition, it's not gonna affect me all that much. Why? First, like every human I suffer from confirmation bias and I tend to discount things that don't fit within my worldview. Second, even if I get over my confirmation bias, you've shown me one-thousandth the information I've already processed, a lot of it already in conflict, so it wouldn't make sense for me to suddenly ignore everything I "know" and adopt a completely new perspective based on this small new bit of evidence you've shown me. Third, while your new information looks like an orange circle within the context you've provided, I probably don't share that context so the circle looks yellow to me.

To put it another way, I know a millionth of what there is to know about race relations, you then showed me a billionth of what there is to know, and either way we'll both continue to swim in a vast sea of ignorance regarding that particular subject with alarmingly incomplete and distorted contexts. Oh sure, I may be off by a couple of orders of magnitude with my estimates but the gist remains the same.

That's one subject. There are millions of subjects with similar complexity so that sea of ignorance becomes a universe (multiverse?) of ignorance.

Given that ignorance is bliss, I'm apparently so blissed out that I'm unaware of it!

Have a blissful day!

Wednesday, July 08, 2020

X is Hitler?

In my lifetime, there have been a lot of people compared to Hitler - especially presidents: currently Trump, but before him Obama, and before him Bush (who was often called "bushitler"), and before them many others.

It's so prevalent that Godwin's law was created: "as an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1".

But none of these Hitler comparisons resonate with me. That's because for Hitler, specific attributes, actions and characteristics subjectively stand out to me in sort of a personal Rorschach test where instead of looking at an inkblot I picture or think about Hitler. The very first thing I think of when I think of Hitler is that he put millions of Jews (and others) in concentration camps in terrible conditions then horrifically killed the vast majority of them. Note the past tense of that last statement. It's not that Hitler might or would have done that in the future, but rather he actually did do that. To me, that's the number one unique characteristic of Hitler that pops into my mind when I think about him.

But if I ignore that, my next Rorschach response is that he was a major instigator of the biggest global conflict ever (WWII) that claimed many tens of millions of lives. Again, note the past tense.

If I ignore both the genocide and the world war, then other more trivial things jump out at me; perhaps things that were enablers of some of his terrible actions. For example, I believe that he was fairly intelligent, extremely well read, a persuasive orator, and charismatic. Lastly, during my Hitler Rorschach test, for some reason I generally remember that he was a vegetarian for part of his life.

After that, my Hitler Rorschach test yields very little response. That's not to say that objectively Hitler didn't have many other attributes, actions and characteristics that defined him. I'm just saying that when Hitler pops up in my mind (pretty much universally because someone else brought him up), subjectively, those handful of things are what I think of.

So when I hear "X is Hitler" it just doesn't resonate or convince me because subjectively, to me, if X hasn't actually killed millions in a concentration camp based genocide and started a world war, then X is simply much different than Hitler in my mind.

A variant is "X is as bad as Hitler." For example, "Trump is as bad as Hitler because ICE puts foreign children in cages." This is also subjective, but to me, no U.S. president or politician has come close to being as bad as Hitler. It's bad that foreign children are separated from their parents and detained (put in cages), but there are orders of magnitude fewer children and many orders of magnitude fewer children are killed (and none intentionally) relative to Jewish children in Hitler's Germany. The size of the atrocity makes a difference to me. Sure, one child dying is really bad, but 1,000,000 children dying is a lot worse to me.

As I mentioned above, the tense of the verb matters (subjectively) to me. Thus, "X might/will become as bad as Hitler" just doesn't work for me at least partly because even Hitler wouldn't have become as bad as Hitler except in the very specific set of circumstances that he lived (childhood, WWI, Treaty of Versailles, external influences, economic depression, etc.). A variant of this is "X is on the same path as Hitler." That's more possible but none of our Presidents nor politicians have been on a path anything like Hitler's in my opinion. For example, none of them have written a screed like Mein Kampf which foretold Hitler's ambition and genocidal nature.

I also question why the comparison is "X is like Hitler" (or one of the many variants I listed above) rather than "X is like Y", where Y is some other tyrant such as Franco, Stalin, Pinochet, Ivan the Terrible, Alexander the Great, etc. Why is X more like Hitler than any of these others? There are thousands of tyrants to choose from.

I'm always wondering why Hitler?

Tuesday, April 07, 2020

Until the End of Time

Here I sit in California, where there is a (partial) lock down of its citizens' activities for the purposes of "social distancing" in order to (partially) reduce the impact of the COVID-19 epidemic.

The toll the virus is having is bad. While merely hundreds of people have died in California as I write this, many thousands have died in New York (mostly in and around the city) and more than 10,000 have died in the United States.

The toll from the lock down is not insignificant. Many have encountered or are facing economic devastation from which they will never recover. Mental health issues, including domestic abuse (from going stir crazy), depression, and suicide, are likely to be much higher than usual. Other adverse health impacts (for example, from not being able to exercise since parks and gyms are closed) and life impacts (marriages and things like court cases being substantially delayed) are increasing by the day.

Someone had to analyze the tradeoffs between the impact of the virus and the adverse effects of a lock down. Because of the federal structure of the United States, this responsibility falls primarily to the governors and mayors of the country (states and localities).
According to the Centers for Disease Control and Prevention (or CDC), state governments, and not the federal government, have most of the power to place people in isolation or quarantine under certain circumstances.
I don't envy those who have had to analyze such tradeoffs and make decisions based on that analysis. It's definitely a lose-lose situation and it's impossible to know with the limited and erroneous data available at the moment what overall impact a given decision will have.

The governor of New York, Andrew Cuomo, shut down New York state in the face of exponentially increasing deaths from COVID-19. All-in-all, I personally thought he did a good job in both making and communicating his decisions.

In the press conference where he announced the lock down, he said a couple things that I think are notable. The first was "I take full responsibility for this decision [to lock down New York]." I applaud that and that statement is part of the public record so he owns it and will own it in the future.

However, I found one of his statements quite frightening: "if everything we do saves just one life, I'll be happy." The reason I find this frightening is I've been wondering just what the threshold is in order to justify extreme measures such as a lock down.

Approximately 1,000 people die every year in New York from influenza or resulting complications. There's no doubt a yearly winter long social distancing lock down would save at least one life. Could that justify an annual lock down? A permanent lock down? If not, why not? What is the threshold in lives that justifies it? Did that threshold get lower now because of COVID-19?

There is little doubt that governments at all levels have broad powers that pretty much trump the Constitution when there is a state of emergency. But defining an emergency is a subjective thing. About 3 million people die from all causes in the United States every year. If draconian measures were taken, that number could be reduced substantially. Is anything that causes significant death sufficient to declare a state of emergency? If not, why not?

Some estimates of the COVID-19 impact were millions dead in the United States. However, those numbers came from models that were based on terribly incomplete and probably extremely erroneous data. Are worst case numbers based on bogus data sufficient to declare a state of emergency? That's pretty much what happened in this case. Those numbers might turn out to be correct but that's very unlikely. But we'll never know since we don't have an alternate universe in which to test different courses of action (including doing nothing).

I find the issue of punishing everybody to protect the few interesting as well. A healthy 20-year-old has very little (or at least much less) to fear from the virus. To restrict and harm a whole and very large class of people (young) in order to benefit other people (older folks like me) without any sort of compensation seems wrong to me. The psychology of that may backfire at some point - 20-year-olds may eventually say "enough" and ignore the lockdowns.

I'm also concerned that giving police extraordinary powers is often a really bad idea because it gives a great deal of authority to people who like power and authority and often aren't terribly thoughtful or responsible. For example, yesterday police arrested a man paddleboarding in the middle of the ocean for violating California's lock down laws. They got two boats in order to chase him down, forced him to shore, handcuffed him, and took him to the nearby Sheriff's station to book him.

https://static.pjmedia.com/lifestyle/user-content/36/files/2020/04/PADDLEBOARD-WITH-COPS.jpg

I guess the thinking is that if they allow one guy to paddleboard in the middle of the ocean by himself, then everybody's gonna paddleboard in the middle of the ocean by themselves and, uh, and, well, I'm not sure what the problem would be with that. Indeed, it seems to me that the police were abusing their authority and not using common sense and wasting resources.

In full disclosure, I've been sneaking onto the beach (all the beaches are closed, along with the ocean) in the middle of the night in order to run, because my knees can only withstand running on very soft surfaces such as deep sand. So I might also end up in the slammer like the paddleboarder. I would stick to bicycling but they've also closed all the bicycling trails. They've also closed all the gyms. So I can either sit around and get fat and out-of-shape and unhealthy or I can break the laws of this state of emergency.

But it's not just people playin' in the parks:
LA Mayor Eric Garcetti has announced plans for a spying program to look for businesses that are open. He announced this week that he has already shut off the water and power to eight businesses that he didn't deem "essential."
Essential? Essential to whom? That's another subjective term. In Vermont, clothing seems to be considered non-essential:
...retailers such as Target, Walmart and Costco are now required to limit the sales of nonessential items in order to mitigate the spread of COVID-19.
The directive was announced by the Agency of Commerce and Community Development on Tuesday. The agency hopes it will reduce the overall number of people going into stores to purchase items such as clothing...
Yet I imagine if someone goes to the grocery store naked, the police would decide that clothing was indeed essential!

Where does it end? COVID-19 may be with us forever now. It may or may not be susceptible to a vaccine (other coronaviruses just cause colds and nobody has come up with a vaccine for them yet). It may or may not ever be reliably treatable (perhaps anti-virals will work to some degree but they probably won't be completely effective). People are still arguing over whether or not masks help significantly (I personally am absolutely convinced masks help).

So keeping the lock down going forever may always save one more life and give Cuomo and others the reason they need to keep New York and much of the rest of the country locked down until the end of time.

Tuesday, February 11, 2020

Recent Real World Example of Efficiency vs. Resilience

I'm not a very good libertarian because I don't support unlimited free trade. One reason is that unrestricted free trade tends to concentrate manufacturing of given products in a small number of countries. That's very efficient, of course, because of amazing economies of scale. But what if something happens to one of those massive manufacturers? For example, in this post I wrote:
As a roboticist, I have almost a fetish for electric motors and actuators and the production thereof. While I’ve never visited their factory in China (Hong Kong area), some colleagues that have visited it describe Johnson Electric as one of the most awesomely efficient motor production facilities in the world; in one end goes copper ore and other raw materials and out the other end comes millions of motors per day. It’s a shining example of economies of scale and efficiency. Their specialty is automotive electric motors (for power windows, for example) and they produce a significant fraction of all motors worldwide in that niche. If trade restrictions and tariffs were further reduced, no doubt they would have even a larger share of the market and be even more efficient and be able to produce and sell the motors at a somewhat lower cost.
I imagine that part of the appeal of free trade is that there would be many extremely efficient companies like Johnson Electric, each thriving in a specific niche with tremendous volumes, yet with enough competition from a handful of other companies to drive relentless innovation, quality improvement, and cost reduction.
However, there’s potentially a downside to such a scenario. What happens if something happens to Johnson Electric? What happens if there’s political unrest (war), a fire, or a natural disaster?
To war, fire and natural disaster we can now add epidemic. China's latest coronavirus problem is causing economic turmoil beyond the epicenters of the epidemic:
The coronavirus outbreak in China has generated economic waves that are rocking global commodities markets and disrupting the supply networks that act as the backbone of the global economy.
Some of this would happen if there is any international trade at all. But the more free trade there is, the more susceptible we all are to the economic disruptions that have been (and will be) caused by the virus. Having multiple sources for those things manufactured predominantly in China would be less efficient in the general case, but more resilient in the face of issues in one part of the world.

My company has been adversely affected (but not badly) by these disruptions. We have most of our electronics boards manufactured in China and we had to scramble to move that production elsewhere because the factories we normally use have been shut down due to quarantines.

Nobody twisted our arms and told us we had to manufacture in China, but in a world with minimal tariffs and minimal trade restrictions, that's simply what naturally happens. If China hadn't taken over manufacturing small quantities of boards, domestic companies would have developed, and while they wouldn't be as inexpensive and efficient as China was before the coronavirus outbreak, I believe they would be close enough to not matter much. In other words, less efficient, but not terribly less efficient.

Somewhat less efficient but more resilient.