This tweet inspired me to follow up on the piece about AI art and writing that I published yesterday. I couldn’t agree more.

In 2011, the venture capitalist Marc Andreessen wrote that software was eating the world. More than a decade on, I think it’s fair to say that today, the technology eating the world is artificial intelligence. From the algorithms that amplify hate to the chatbots that mimic writing to the deepfakes that manipulate news, machine learning is powerful and dangerous in so many ways, threatening to upend everything.

Right?

It might seem funny to hear someone with two degrees in computer science, one more-or-less directly focused on artificial intelligence, decrying its dangers. It might seem even more ridiculous because I still work in the machine learning field every day.

How can I believe it’s so dangerous while also spending 40+ hours a week on it?

Because, fundamentally, I believe that machine learning has the power to change the world for the better just as much as it does to destroy things. Maybe I’m naive; maybe I’m an optimist. But most people who know me wouldn’t use those words to describe me—if anything, I’m usually described as cynical.

I think the central problem, and the immense tragedy, of machine learning and artificial intelligence today is what it’s being used for. Like early Facebook employee Jeff Hammerbacher, I’m long on AI in theory but short on its actual prospects, because, “The best minds of my generation are thinking about how to make people click ads”—and how to keep eyeballs on a screen or how to automate things that really don’t need automation.

I’d have to get deeper into the weeds of the technology behind AI than a blogpost allows to really explain why this is so often the case, but the basic premise is simple: the feedback loop is closed. Machine learning problems need a clear target and a clear metric, something that can be expressed mathematically to provide feedback to the model as it trains. People can be immensely clever about how they design these systems, using proxy targets when the real goal is hard to define, or exploiting byproducts that result from an easier goal. But at the end of the day, goals like “mimic human writing” or “keep the user engaged longer” are not terribly hard to express.
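To make that concrete, here’s a minimal sketch of what a closed feedback loop looks like. Everything in it is invented for illustration: the “engagement seconds” target, the features, the linear model. The point is only the shape of the thing: a metric expressed as one number, and an optimizer that pushes that number down and asks nothing else.

```python
import numpy as np

# Hypothetical data: features describing a feed item shown to a user,
# and the target the business cares about: seconds of attention.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))          # made-up item/user features
true_w = np.array([2.0, -1.0, 0.5, 0.0, 3.0])
y = X @ true_w + rng.normal(size=1000)  # observed engagement time

# A linear model and a squared-error loss: the entire "goal",
# reduced to a single number the optimizer can minimize.
w = np.zeros(5)
lr = 0.01
for step in range(500):
    pred = X @ w
    grad = 2 * X.T @ (pred - y) / len(y)  # gradient of mean squared error
    w -= lr * grad                        # feedback: nudge toward the metric

print("learned weights:", w.round(2))
# Nothing in this loop asks whether maximizing engagement is good.
# The model only ever sees the metric.
```

Real systems are vastly bigger and messier, but the loop is the same: once the goal is written down as a loss, everything downstream optimizes that number.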

That’s not to say building a model to do these things is easy! Such projects require immense amounts of data and thousands of hours of compute time. Breakthroughs have come through innovations in model design and even hardware design. Skilled practitioners command high salaries because this work is often extremely difficult.

It’s also not to say that problems that are useful for the world are never tackled. Machine learning has a clear role to play in drug development, in disaster modeling, and in myriad other areas. But those are not the easiest or the most lucrative. In recent years, Silicon Valley has been dominated by companies that have software products, because software is cheaper than hardware; the same is true of AI companies. If your data is already digital and your product is going to remain in a computer, all you have to do is build the model and write the code. Even when that is a very, very hard thing to do, it’s still much easier than making a physical thing. And investors love things that are “easy” in this sense—because that’s where you can get a ten- or hundred-fold return in a few years.

But that’s not where we, as humans, need AI to help us. That’s not where machine learning models could fill a role that humans simply can’t, or where human work doesn’t scale—where adding AI will solve problems without creating new, planet-threatening ones. AI is in some sense value-agnostic, like most inventions. But, also like most inventions, it’s very likely to end up causing tons of harm precisely because it’s created and controlled by humans.

It turns out humans are pretty rife with problems.

I’m not writing this blogpost to champion the work I do all day (because if I started doing that, I’d never stop). But my job is illustrative of machine learning’s potential, and of what its practitioners could do, if freed from the strictures of easy problems. At the company where I work, we are trying to scale the human ability to measure the amount of carbon contained in forests, and to predict other things about those forests, like the chance that they’ll be degraded or deforested. These are gnarly problems, hard even to frame, let alone tackle. We can’t just scrape the entire web and get a wonderful massive dataset. Every bit of data we have comes from experts measuring and identifying trees, often in remote forests that are hard to reach. We can’t design a simple, quick feedback loop—the metric of success isn’t whether, on average, users spend a few more microseconds looking at ads, but whether, on average, the trees we try to protect are actually protected over years and decades. It’s a much harder problem!

But if—when!—we get everything working as we hope we will, the end result won’t be that a tech giant makes billions and a few democracies slide toward fascism, or that some tech workers get bonuses and a bunch of artists are plagiarized and earn less. It’ll be that we have more forests than we would otherwise (and indigenous communities have more resources, and rural people have new income streams, and water is cleaner, and biodiversity increases, and all of us have a bit more oxygen and a bit less carbon dioxide in the air we breathe).

It’s a beautiful vision. Obviously much more than machine learning has to go right. Obviously things could still go wrong! But the goal isn’t just to make a company money, and ethical considerations are weighed at every turn.

It’s not simple; it’s not solved; it’s not just a matter of more data and more compute resources. And it doesn’t promise order-of-magnitude returns to venture capitalists in a few years.

We’re not alone. There are other benevolent uses of AI out there. Everyone I knew who went into machine learning in college was entranced by its potential, by the beauty in the math of it all, and by the possibility of making breakthroughs and making the world better. But in Silicon Valley it’s easy to be drawn into the easier problems, into the power of massive datasets and companies that will give you all the resources imaginable to train every model you can dream up.

So, yes, of course I’m troubled by everything terrible AI has wrought. But I’m also saddened by how all those hours of human and computer time could have been spent differently. I can’t even imagine what we might accomplish if we were unshackled from the need to always make companies and investors more money, if research could be applied broadly for good, not just to things that are, at best, value-neutral and, at worst, quite bad.

The greatest minds of my generation are using math and data to corrupt the world. They could be using math and data to save it.

(All thoughts are my own and do not reflect the opinions of my employer.)