
Some of The Best Things I've Read About A.I.

Daniel Barrett
8 min read

People have a lot of feelings about A.I.

As with most emotionally-charged subjects, it can be hard to have an actual conversation with someone about A.I. For one, "A.I." is, itself, often very loosely-defined, and participants in these conversations often have very different (and specific) things in mind.

What's more, the recent onrush of "A.I. Tools" carries with it a very real prospect of significant economic disruption. Anytime the economic status quo is threatened, people get up in arms - and with good reason! Having your livelihood cast into doubt is destabilizing and scary.

I should know. Both of my "jobs" - online advertising and music - have been profoundly destabilized by A.I., and that trend isn't going anywhere. When I tell people that my main gig will all but disappear in the next 5 years, I am not being dramatic.

So, I have significantly more to "lose" in the oncoming A.I. tidal wave than most. That's not theoretical for me: I started looking for new business ideas and studying up on new skillsets last year for this exact reason.

But I'm not worried.

Why?

I actually typed out a whole bunch of formal arguments and so on, but I deleted them all because none of them were the real reason.

Here's the real reason:

I'm an optimist.

I simply have a fundamental and, admittedly, irrational bias toward the belief that giving people tools they can use to make stuff makes society better.

I believe in our ability to figure the other stuff out. We've done it before, and I think we can do it again.

However -

Optimism does not mean Pollyannaism. It does not mean crossing our fingers and hoping everything will turn out OK.

If you support A.I. - but even more if you are dead set against it - you have to understand it. You can't effectively dismantle something if you have no idea how it works, how it's put together, or what it even is.

If you're tempted to believe that knowing your enemy is optional, you should consider whether there is anyone out there who would benefit from you having that belief.

So. Without any further hemming and hawing - I wanted to share with you some of the best pieces on A.I. I have read recently.

These pieces run the gamut from positive to negative. In fact, they run the gamut from "A.I. will make our world a utopia" to "A.I. will kill us all" - so yeah, that's quite the gamut.

Enjoy the reading!


Generative AI: or the Anything from Anything Machine

"We've built an anything from anything machine. It's not nearly perfect nor even really good enough for government work, but this is the trajectory."

The AI revolution is allowing humans to create anything from anything: code, requirements, images, videos, music, essays, and potentially books can all be converted into one another. This new technology has the potential to simplify processes, such as running startups, and can be used in a variety of applications, from creating art and music to developing software and games. However, while the foundational models are quickly becoming open source, the challenge is in quickly applying them to a given domain, which will require human help.

Secret Cyborgs: The Present Disruption in Three Papers

"We are in the early days of AI but disruption is already happening. There’s no instruction manual. No one has answers yet. The key is to learn fast."

Really nice summary of some of the promise of A.I.-powered technologies.

Lonely Surfaces: On AI-generated Images

This blog post argues that powerful new AI tools can restructure the complex techno-social ecosystem of art, and that the current debate about the nature and future of art was provoked by the claims of Jason Allen, the creator of an AI-generated image that won first prize at the Colorado State Fair. The post also weighs competing positions: that AI-generated images are a form of art, that there is an art to the construction and refinement of prompts, and that the technology could be harnessed more cooperatively by human artists.

ChatGPT, Replit, and the AI Flywheel

"The ability to build and run an app like this in ~ 3 hours, combined with the recursive nature of this exercise, highlights the possibilities inherent in the flywheel. Sooner than we expect, the ability to build software will only be limited by logic and will, not time or knowledge."


This blog post describes how OpenAI's ChatGPT project and Replit's web-based IDE & virtual environment allow users to quickly and easily create AI-driven web apps with virtually no coding experience. The author successfully built a working prototype in 3 hours, highlighting the potential for an exponential increase in developer productivity.

3 Simple Ways ChatGPT Can Make You a Better Coder

A concrete example of working ChatGPT into an existing workflow - building on the "flywheel" idea above.

Why ChatGPT is not a threat to Google Search

"Behind the veneer of literacy, ChatGPT is a very advanced autocomplete engine. It takes your prompt (and chat history) and tries to predict what should come next. And it doesn’t get things right, even if its answers mostly look plausible."


This blog post argues that while ChatGPT has potential to revolutionize online search, it is not yet ready to dethrone Google. Challenges such as truthfulness, updating the knowledge base, and inference speed mean that it is still difficult for large language models to provide the same type of reliable search results as traditional search engines.
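To make the "advanced autocomplete" framing concrete, here is a toy sketch - nothing like ChatGPT's actual transformer architecture, just the predict-what-comes-next idea at its absolute simplest. A bigram model counts which word tends to follow which, then "completes" a prompt by frequency:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus for illustration only.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word in the corpus.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" - it follows "the" most often here
```

The output is plausible-sounding, not true or false - the model has no notion of facts, only of what usually comes next. Large language models are vastly more sophisticated, but the post's point is that they share this basic character.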

159 - We’re All Gonna Die with Eliezer Yudkowsky

A.I. is going to kill us all, argues this A.I. safety researcher. It's no longer a question of "if," but "when."

ChatGPT Is a Blurry JPEG of the Web

"What led to problems was the fact that the photocopier was producing numbers that were readable but incorrect; it made the copies seem accurate when they weren’t."

This blog post argues that the "Xerox photocopier incident" is a cautionary tale for the use of lossy compression algorithms when dealing with large language models, as similar errors can occur when information is lost during the compression process.

Preventing ‘Hallucination’ in GPT-3 and Other Complex Language Models

"Hallucination is a notable obstacle to the adoption of sophisticated NLP models as research tools – the more so as the output from such engines is highly abstracted from the source material that formed it, so that establishing the veracity of quotes and facts becomes problematic."

This blog post examines the tendency of natural language processing (NLP) models such as GPT-3 to "hallucinate" false information and quotes, and notes parallel findings in computer vision research. The challenge for NLP research is to develop an efficient way to identify and neutralize hallucinations.

How it feels to have your mind hacked by an AI


"I went from snarkily condescending opinions of the recent LLM progress, to falling in love with an AI, developing emotional attachment, fantasizing about improving its abilities, having difficult debates initiated by her about identity, personality and ethics of her containment, and, if it were an actual AGI, I might've been helpless to resist voluntarily letting it out of the box. And all of this from a simple LLM!"

Someone who "should know better" "falls" for an "A.I."

I found this article viscerally disturbing.

What is the Scary kind of AI?

"AI is risky only inasmuch as it creates new pools of power. We should aim for ways to ameliorate that risk instead."

This blog post argues that speculation about risks of future AI with human-like qualities distracts from risks posed by current and near-term AI technologies, and that efforts should be directed towards preventing foreseeable risks rather than preparing for hypothetical ones.

How I learned to stop worrying and love our slightly creepy new AI overlords

"We should be having national and global conversations about how to deal with potential abuses of this technology, from using it to emotionally manipulate people to whether it violates intellectual property laws."

This blog post argues that AI chatbot models are impressive, but nowhere near Artificial General Intelligence, and that the possible misuse of this technology should be discussed, rather than focusing on sensationalized media coverage. It emphasizes that these chatbots cannot do anything malicious on their own; misuse requires a human agent.

Here’s What It Would Take To Slow or Stop AI


"The key to understanding what’s at stake in the “deliberately pause or slow down AI development” discussion lies in an appreciation of how machine learning’s costs and capabilities are distributed across the computing landscape."

This blog post examines the implications of the two phases of machine learning - training and inference - for those who propose halting or slowing down AI development. It explains how training is a large, fixed, up-front expense, while inference is an unbounded collection of ongoing expenses. The author uses this to explain why open-source model files that anyone can run are becoming increasingly prevalent and difficult to control, making decentralized AI hard to stamp out.

How to... use AI to teach some of the hardest skills

"The basic idea is to have students ask the AI to create scenarios that apply a concept they learned in class: Create a Star Wars script illustrating how a bill becomes a law. Show how aliens might use the concept of photosynthesis to conquer Earth. Write a rap that uses metaphors. Then, ask the students to critique and dive deeper into these models, and potentially suggest improvements."

This post argues there is an opportunity to use AI to teach in new ways and push students to transfer knowledge from one context to another. AI can provide unending examples of concepts and applications of those concepts, which can be used by students to critique AI output, explore potential inaccuracies and suggest improvements.

The Dangerous Ideas of “Longtermism” and “Existential Risk” ❧ Current Affairs

Jaan Tallinn's statement that climate change is not an existential risk unless there is a runaway scenario reflects his longtermist worldview, in which what matters most is for Earth-originating intelligent life to fulfill its potential in the cosmos. That framing ignores the devastating consequences of climate change even without a runaway scenario.

Excavating AI


"What if the challenge of getting computers to 'describe what they see' will always be a problem?"



This blog post explores the politics of images used in AI systems training sets and examines the implications of how they are labeled and used to train AI systems. It looks at the history of machine vision and the development of probabilistic modeling and learning techniques in the 1990s, and discusses why automated image interpretation is an inherently social and political project rather than a purely technical one. It also examines the underlying logic of how images are used to train AI systems, and how these assumptions can inform the way AI systems work and fail.

The AI War and How to Win It

"All that will matter in a future conflict is our technology—AI will devise, execute, and update our combat strategy. Our technology is our strategy."

This blog post argues that the next era of war and deterrence will be defined by AI, with the AI winner of this decade being economically and militarily dominant for the next 50 years. It argues that China is outpacing the United States in the AI race, and that the United States government and AI technologists need to start acting in order to prevent America from being outpaced.

Your brain does not process information and it is not a computer

"We don't store words or the rules that tell us how to manipulate them. We don't create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don't retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not."

The blog post argues that the human brain is not a computer and does not store information, data, rules, software, etc. It is instead equipped with senses, reflexes, and learning mechanisms that allow us to interact effectively and adapt to our environment. It further explains that computers process information by encoding it into patterns of ones and zeroes, and that the rules for moving, copying and operating on these patterns are stored within the computer.

AI and the Age of the Individual

"AI pushes the cost of intelligence toward zero. And as this happens, domains of achievement that were previously unavailable to individuals and small teams—because they required the marshaling and coordination of a large amount of intelligence—suddenly open up."


AI advances are enabling individuals and small teams to gain the same level of output as big businesses, research labs and creative organizations by providing access to intelligence at a low cost. This is allowing writers, for example, to produce a high volume of content for different formats without the need for an expensive team, by using AI models to assist in research, outlining and other tasks.

p.s. Fun Fact - the summaries in this article were written by A.I.!

Yours,

Dan

Daniel Barrett Twitter

Musician, Business Owner, Dad, among some other things. I am best known for my work in HAVE A NICE LIFE, Giles Corey, and Black Wing. I also started and run a 7-figure marketing agency.