By Chris Liu
I’m a physics alum from the University of Michigan, but I’m not confident I could complete a freshman physics final — let alone ace it. Given some time with the materials and formulas, I’m sure I could muster something, but the confidence simply isn’t there. In all fairness, I’m not alone in this: many of us have forgotten much of what we learned in college. As someone working in a field only tangentially related to mechanics, I have no need to solve for the forces on a cube balancing on a sphere; general concepts and terms are sufficient.
Conversely, my dad is an electrical engineer and a workaholic. Even as he approaches retirement age, he still eagerly works over 50 hours a week. As one of the top students in his province in China, he is evidently intelligent, and he is one of the most industrious people I know. When I was taking my first-year electricity and magnetism (E&M) class in college, COVID-19 struck and the semester ended virtually. Being at home, I was able to ask my dad questions about my homework, and he was gleeful to find that he could still solve every problem. While my problem sets were technically related to his work, they are not the kind of calculation he has had to perform by hand in decades. And yet, recalling the fundamentals of E&M, he quickly figured out once again how to solve such elementary yet tricky problems.
With the existence of the internet (e.g. Chegg) and now especially AI, people claim textbooks are useless, but I often need to refer to mine in my current role as an engineer. Learning on the job is great, but it doesn’t lead to the deeper understanding my father’s expertise has made me crave. Fundamentals are key to understanding virtually anything, especially highly complex and technical subjects. I often feel imposter syndrome about my piano playing: my relatively late start and few years of formal lessons have left me able to play the music I want at a decent level, yet struggling with sight-reading and even certain arpeggios. In truth, I feel slightly embarrassed by this.
In great contrast to my own learning mindset, I recently saw a tweet stating that when learning something new, the fundamentals should be skipped. It is better, the argument goes, to jump straight to advanced topics, since those topics are made up of simpler ones. Not only is this mindset immediately wrong, going against basic pedagogical principles (not that this person values principles; guess what kind of person tweeted this), it also suggests someone who has probably never had to learn anything of significant depth. In terms of my own interests, it is nonsensical to suggest that a beginner could start with quantum mechanics without understanding math and elementary physics, or play Liszt’s Hungarian Rhapsody No. 2 without knowing how to play a scale. Although, I am sure that for the former, there are people who believe that knowing of Schrödinger’s cat demonstrates their understanding of the field, à la the Dunning–Kruger effect.
My bias against the now-saturated field of computer “science” may play into this belief of mine, but the relative ease with which one can learn new programming languages, especially with the existence of Stack Exchange and now AI, has led certain people to believe that this ease applies to any field. While learning is now theoretically easier than it has ever been thanks to the internet, that does not mean a proper understanding of fundamentals, or an investment of time, is no longer required to become competent in a subject area. I also recently saw a tweet (with many in the comments agreeing) stating that the tweeter only uses AI (e.g. Grok, ChatGPT) to look up information, as opposed to traditional search engines like Google. Certainly the quality of sites like Google has unfortunately degraded (partially due to AI, mind you), but one can still find the correct websites containing accurate information on nearly any subject. Finding accurate information is a skill unto itself, and the popularity of AI demonstrates not only the general public’s lack of this skill but also its lack of care for accuracy. This is not surprising, as well demonstrated by the hordes of people in this country who believe the truth is whatever Trump says, but AI only gives such people more confidence in their misinformation and validates the audacity of the MAGA class.
I have played around with ChatGPT multiple times before; I have occasionally used it for assistance, namely for practicing job interviews and as a phrase thesaurus. However, I do not use it as a search engine. I have tested its accuracy in multiple fields: physics, language (Chinese, Japanese, Ancient Egyptian), and elite gymnastics scoring and results. While it is often roughly 90% accurate, it does hallucinate information, particularly depending on how the user phrases their questions. You can convince AI of false information. After all, contrary to popular belief, AI is not a magical being. AI is an evolution of neural networks; neural networks are an evolution of machine learning; machine learning is an evolution of statistical computing. At its core, AI is a large, complex meshwork of matrices and matrix operations, much of which remains a black box (although I have heard of recent developments in better understanding that black box).
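To make the “meshwork of matrices” point concrete, here is a minimal sketch (not any real model, and with toy weights invented purely for illustration) of how a tiny two-layer neural network’s forward pass reduces to nothing but matrix multiplications and a simple nonlinearity:

```python
# A toy forward pass: input -> relu(x @ W1) @ W2.
# All numbers here are hypothetical; real models just do this
# with billions of parameters instead of six.

def matmul(A, B):
    # Multiply matrix A (m x n) by matrix B (n x p) with plain loops.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def relu(M):
    # Elementwise nonlinearity; without one, stacked layers would
    # collapse into a single matrix multiply.
    return [[max(0.0, x) for x in row] for row in M]

def forward(x, W1, W2):
    # "Generating an answer" is, at bottom, repeated steps like this.
    return matmul(relu(matmul(x, W1)), W2)

W1 = [[1.0, -1.0], [0.5, 2.0]]   # toy layer-1 weights
W2 = [[1.0], [-1.0]]             # toy layer-2 weights
x = [[2.0, 3.0]]                 # a single two-feature input

print(forward(x, W1, W2))  # → [[-0.5]]
```

Nothing mystical happens at any step: it is arithmetic on arrays, which is precisely why the system can be confidently wrong — there is no fact-checker inside the matrices.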
However, my interest lies not so much with the mathematics and the history or future of AI itself, but rather with its impact on the general public’s relationship with information. It supports a growing trend of treating the “truth” as an amorphous blob that one can willfully manipulate. This is extremely dangerous when combined with the devaluation of education and intellectualism in the US. But I don’t think AI started this trend. Rather, as technology continues to progress rapidly, a public that is no better educated becomes more and more detached from the systems and technologies being implemented around it. To understand the latest developments in a particular field, one likely needs a PhD or at least some form of higher education (the majority of Americans do not have a bachelor’s degree) to even begin, and that is just for one field. Back in the late 19th century, physics undergraduates spent years solving complex circuit problems that we now fly through in a single semester; today, becoming a physicist requires learning quantum mechanics. With degrees taking no longer to complete than they used to (i.e. four years on average for a bachelor’s), there isn’t as much time to linger with and absorb the fundamentals. I don’t currently have a solution to this issue, as there is an inherent need to quicken the learning process if we want to create thinkers who can work on the latest problems, and technology has made learning more efficient than ever before; but I believe this to be a large contributing factor to the growing education gap between academics and non-academics. The world is becoming ever more complex and specialized, and while that is great for technological development, it also means the average person does not understand the world around them.
The average person does not understand how standard kitchen appliances operate, yet believes they can speak on complex subject matter after a couple of TikToks. Armed with an AI-written script, there is a startling comfort with becoming an “educator” on any manner of subjects. As a result, those who desire to understand the world but don’t know how find themselves wrapped up in conspiracy theories. See “seed oils” or even “flat earth.” These are desperate (and unfortunately consequential, even dangerous) attempts to explain the world we live in. The average American does not know how to find or verify correct information. The truth is based not on logic or principles but on whatever supports a pre-existing worldview, muddled with political beliefs and a child’s understanding of scientific phenomena. In the eyes of the American people, the definition of truth is being changed from fact to belief.
When confronting such claims, the most powerful questions you can ask are “how?” and “why?” The misinformer may be able to produce one statement in answer, but ask again about that statement, and most arguments immediately crumble. When claims are not based on a fundamental understanding of a topic, they have no foundation to stand on. This leads to argument instead of debate and is ultimately the death of educated conversation in America.
No one is capable of knowing everything, a fact that those of us with the fewest brain cells have the hardest time accepting. While I personally try to develop a baseline understanding of subjects I deem important, I also feel comfortable listening to experts on subjects I am not familiar with. If what the experts say feels questionable to me, I might feel motivated to investigate the field further. Ultimately, I do tend to trust what academics say, despite being well aware of the multitudinous incompetencies in academia and the danger that arises when corporate interests interfere with research. There is no objectively correct way to approach this nuance: finding the truth is a matter of trial and error and pattern recognition. The more one learns, the more one can cross-reference new information against one’s existing mental database and understanding of the world. However, doing this effectively requires a fundamental understanding of various fields (hence the importance of standardized schooling). It should be unsurprising, then, that Trump is calling for the Department of Education to be completely dismantled.
The best way for the general public to fight back is not through conspiracy theories, nor through overreliance on fallible AI, but through real education and learning. Given the current state of the US, I don’t have a specific plan for achieving this, but I do know that the sentiments of the pro-AI tweets I have seen need to be refuted. AI does have genuine uses; a commonly cited one is the identification of tumors in medical imaging, since image pattern recognition is an area where AI outperforms more traditional methods of image analysis. However, the AI chatbots that companies have thrust onto the general public are an effort to publicize and normalize the technology. First, this makes the public comfortable with its presence on all of our platforms as it continuously gathers our data to become each of our perfectly individualized mini-salesmen. Second, it makes the public reliant on AI, and thus willing to pay once these chatbots become locked behind a monthly subscription (which is already starting to occur). It is not a free gift bestowed upon us by our country’s tech bros, allowing any of us to become “experts” in any field, as some AI bootlickers would have you believe. Nor was the ability to become competent in a field ever guarded from the public: anyone with enough time and motivation can become decent at drawing, or semi-knowledgeable in the history of a particular country, thanks to the internet. Becoming good at things requires focus and effort; the fact that certain people lacked those traits does not mean that “elitists” in various disciplines were gatekeeping their fields. I will save discussion of AI “art” for another thinkpiece.
Blame this opinion on my Chinese upbringing, but good public education needs to be heavily prioritized in this country. Its relative scarcity in the US has left the average American unable to think beyond their immediate surroundings and the immediate present. As such, a tool like AI, which seemingly answers all of one’s questions in a human-like way, presents itself as a miracle cure for this ignorance. Somehow, the messaging format of AI, and its Turing-test-passing style of speech, has convinced many that a trustworthy human of infinite wisdom lies on the other side of the screen. Just as one cannot trust all (frankly, many) sources on Google, AI cannot be completely trusted either, with the additional caveat that it cites no authors or sources, making it impossible to trace the origin of its information, one of the primary ways of vetting it. But it is likely this very fact, the apparent lack of an author, that attracts people. A person can be scrutinized and checked for political affiliations; AI is seemingly “unbiased.” This, of course, completely ignores the fact that AI’s abilities are based entirely on human-written text. This failure to grasp the difference between a computer performing a mathematical calculation and an AI generating text is the foundation of this misguided trust.
To home in on my emphasis on public schooling: much of my support for public education stems from a belief that an educated public is essential to the success of America. This goal is threatened by the inequality exacerbated by private schooling and by the deregulation of home and charter schooling. Properly placed criticism of public schooling in the US frequently leads to endorsements of private, charter, or homeschooling; we need instead to invest in public education. It is bad enough that certain schools in Texas are allowed to teach creationism as an “alternate theory” to evolution, or that red states are cracking down on teaching the cruel realities of U.S. history. Beyond these obvious talking points, a particular peeve of mine is the debate over what ought to be taught in schools, especially at the high school level. You have likely seen social media posts along the lines of, “Why did they teach us how to analyze books and solve math equations when I don’t use those as an adult? They should teach kids how to file their taxes.” While I have no qualms with home economics classes (which cover a semester, not four years of high school), these sentiments feel distinctly anti-intellectual. With the advent of AI, those with no passion for critical thinking now have further reason to see schooling as unnecessary. How are we supposed to develop innovators and thought leaders if we believe nothing beyond the basic skills required to exist as an adult should be taught? I find this mentality even more confounding given the overlap between people with this sentiment and those who want us to beat China in technology and innovation. They want practical skills to be taught but fail to comprehend that critical thinking and creative thought are essential to adult life. As the world becomes more reliant on AI, those who can still think through problems on their own will become an even scarcer resource.
The Flynn effect is the observation that measured intelligence increases with each generation, owing to factors such as increased access to education, better-educated parents, and access to a nutritious diet. Average IQ scores rose by roughly three points per decade over much of the twentieth century, but newer research shows the Flynn effect reversing (I am aware of the numerous issues with IQ, but other trends point in the same direction). In the age of technology, the world is getting dumber. AI without proper application becomes a handicap for a nation insistent on becoming as brain-dead as possible. The short-sightedness of dubious information on demand could lead to a troubling future for Gen Alpha, as we are already witnessing a generation raised on iPads, and more specifically on instant gratification and weaponized incompetence in schools.
AI is a shiny new corporate toy taking precedence over the long-term education and thoughtfulness of the American people. It is being pushed boorishly into every sector of our technological lives. The sentiment is always “we’ll fix it later,” but later never comes. While the corporations see short-term profit gains (with unfortunate long-term consequences), the right sees an opportunity to attain its next generation of mindless followers. Being able to hold well-reasoned opinions, engage in nuanced conversations, and think critically about the information presented to us directly counteracts the work of the MAGA right. As mentioned, there are genuine applications of AI that will allow us to expedite and optimize tasks in a wide variety of fields. However, it is concerning to witness it being used to entirely replace human thought and creativity, a task it is not designed to handle. Such technology should not have been thrust onto the public so flippantly; the fact that it was only demonstrates the eagerness of tech giants to integrate AI into people’s daily lives, disregarding consent.
The law is the minimum accepted level of virtue required to exist within our society. The solution requires an overhaul of the American values system and restrictions on late-stage capitalism. It requires a care for others that no longer exists in our hyper-individualized and intentionally isolating society. We need to place restrictions on AI (à la China’s gaming restrictions for minors: curfews, time limits, and child-friendly censoring) and give public education the resources necessary to refocus on fostering independent thought and the ability to discern truth from falsehood.
Written by Chris Liu & Edited by Hailey Espinosa