You are a builder. Can AI replace you?

Published: Apr 12, 2024 | Last edited: Apr 12, 2024

We asked an AI researcher about the growing agency of artificial intelligence … or just whether ChatGPT is going to begin stealing product developers’ jobs.

Daniel Bean, Managing Editor @ Mixpanel

You are a builder. And what makes you a builder is not just your ability to code, sketch an illustration, or write copy for a marketing campaign; it’s that you handle everything involved in taking an idea to a finished product. It’s agonizing over moving targets, considering new data, and even changing your mind on what you think is good.

Building is decision-making. As it happens, this is an area where artificial intelligence is advancing quickly.

Today, AI tools have become a powerful aid in helping technical and non-technical builders with those aforementioned tasks of coding, illustrating, writing copy, and the like. But how far off are we from this technology no longer needing us humans to guide it in the building process? Is AI capable of thinking like a product manager and running a product dev project from zero to shipped? I was excited to get in touch with artificial intelligence and neuroscience researcher Ian Eisenberg to pick his (human) brain about this.

Ian should have no problem recognizing a builder when he sees one: Aside from his day job as Head of AI Governance Research at Credo AI, he founded the AI Salon, a community focused on AI’s future impact that frequently hears from startup founders and other folks creating new AI products or ventures.

Read my full chat with Ian below, where we cover how AI’s “magic” thinking and learning actually work, what the next steps for advancement in generative AI will be, and how those of us who are hoping to keep our jobs as digital builders should be feeling right now.

How do you define artificial intelligence?

Different people respond to that phrase in different ways. My first exposure to anything AI-related came with brain modeling, modeling of intelligence, and machine learning bots. I specifically remember being amazed by DQN, which was DeepMind’s Atari-playing bot. It was an example of simple algorithms that had been outlined since at least the ‘90s, scaled up to perform pretty remarkable feats of intelligence. Those bots were able to take on a goal, take in information, learn, and accomplish that goal.

So I think learning is an important part of defining AI. Even though most of the prominent AI models today aren’t being trained continuously on new foundational data, they do “learn” or adapt to new context that comes in from user prompts. That’s why their behavior can change in unpredictable ways as you get deeper into a conversation.

Let’s talk about those generative AI systems of today—things like ChatGPT, Google Gemini, or Midjourney. Has their emergence been surprising and exciting to AI vets like yourself?

Everyone that I know who works in AI has both of those feelings at the same time.

When ChatGPT was released, many of us had already been aware of GPT-3 and played with that. So there was a little feeling of “I don’t understand why this is such a big thing.” But ultimately we all wound up being surprised at how well RLHF—reinforcement learning from human feedback—worked and how important the UI changes were in popularizing the AI system. Those advancements made this model much more usable. It seemed to get what most people wanted more easily. And so, yeah, that was a blow-away moment. And even people who continuously try to update their expectations so they won’t be surprised still are, as advancements keep accelerating. Sora was remarkable, and models are beating performance benchmarks faster than people expected, to name two more examples.

Yet, I’m assuming we haven’t scratched the surface of what’s to come for AI. What are the next steps forward?

The most obvious way forward is outlined in the scaling laws for LLMs that were introduced in recent years. They essentially say that when you put more compute and more data into these general-purpose systems, you get predictable improvements in performance. That proven predictability means billions of dollars can be invested with confidence toward building a much better model. And they will be.
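To make the shape of that claim a bit more concrete, here is a minimal sketch of what an LLM scaling law looks like, assuming a standard power-law functional form; the coefficient values are illustrative placeholders I’ve assumed, not numbers fitted to any particular model.

```python
# A minimal sketch of a scaling law: loss falls smoothly and predictably as
# model size and data size grow. Coefficients below are assumed for
# illustration, not taken from any specific paper.

def predicted_loss(params: float, tokens: float) -> float:
    """Estimate pretraining loss from parameter count and training-token count."""
    E = 1.7                  # irreducible loss (assumed)
    A, alpha = 400.0, 0.34   # model-size term (assumed)
    B, beta = 410.0, 0.28    # data-size term (assumed)
    return E + A / params**alpha + B / tokens**beta

# Scaling up predictably lowers the loss, which is what makes large training
# runs an investable bet:
print(predicted_loss(70e9, 1.4e12))    # a baseline-sized run
print(predicted_loss(140e9, 2.8e12))   # roughly 2x compute -> predictably lower loss
```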

But there’s another route to advancement. If you froze the quality of foundation models today and we just had people playing around with building new agents, using more expansive and more costly prompt engineering strategies, that would still result in huge performance increases—potentially for cheaper.

“I don’t know how you would want to define thinking in a way that excludes what AI is doing: It’s converting previous context in a very complicated way, taking some input, and outputting something reasonable.”

Because, remember, these systems learn from prompts at inference time, and prompting a model keeps getting cheaper. So it’s less expensive and faster than training these models on new foundational data. You can do as complicated a prompt chain as you like. And this is hugely important for people building off of foundation models. You’re not always going to fine-tune these systems, but better performance, particularly on challenging tasks involving reasoning, is going to come from strategies like chain-of-thought, multiple samples, and automated prompt optimization.
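As a rough illustration of those inference-time strategies, here is a sketch of a chain-of-thought prompt sampled multiple times with a majority vote over the final answers. The `call_model` function is a hypothetical stand-in for whatever chat-completion API you actually use; nothing here is specific to a particular provider.

```python
# Chain-of-thought plus multiple samples ("self-consistency"), built entirely
# on top of a frozen foundation model at inference time.
from collections import Counter

def call_model(prompt: str) -> str:
    # Hypothetical stub: swap in your model provider's completion call.
    raise NotImplementedError("replace with a real chat-completion request")

def answer_with_cot(question: str, samples: int = 5) -> str:
    """Ask for step-by-step reasoning several times and keep the most common answer."""
    cot_prompt = (
        f"{question}\n\n"
        "Think through this step by step, then put your final answer on the "
        "last line, prefixed with 'ANSWER:'."
    )
    finals = []
    for _ in range(samples):
        reply = call_model(cot_prompt)
        finals.append(reply.rsplit("ANSWER:", 1)[-1].strip())
    # Majority vote over the sampled final answers.
    return Counter(finals).most_common(1)[0][0]
```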

So I think prompt engineering will be the faster step to getting AI to the next level.

What are the human-like behaviors that today’s AI agents are most closely replicating? You mentioned “learning.” Does AI “think”? How does that actually work?

There’s work in sensory domains that seems to show the representations that convolutional neural networks learn are similar to the representations developed in the brain, say, in the first 50 milliseconds of seeing an image. This doesn’t mean the AI is doing the same thing human brains are doing when we see an image, but it’s suggestive. Does that representational similarity extend to more complicated behaviors like thinking? I doubt it, but I generally wouldn’t want to define “thinking” based on the particular implementation anyway. Thinking is a behavior—a process of reasoning—that can potentially be achieved by many processes.

So I’m firmly on the side of believing it’s thinking. I don’t know how you would want to define thinking in a way that excludes what AI is doing: It’s converting previous context in a very complicated way, taking some input, and outputting something reasonable. And if you give it more compute during inference time, it will do better. If you ask it to copyedit some writing, for example, it might give some good notes. But oftentimes, you can ask it to take another pass and it’ll give even better notes or suggestions the second time. This seems very similar to giving the AI “more time to think” and getting better outputs.
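That “take another pass” behavior maps naturally onto a simple critique-and-revise loop. Here is a rough sketch, again using a hypothetical `call_model` stand-in rather than any specific API.

```python
# Spend more inference-time compute by having the model review and improve
# its own first attempt.

def call_model(prompt: str) -> str:
    # Hypothetical stub: swap in your model provider's completion call.
    raise NotImplementedError("replace with a real chat-completion request")

def copyedit_in_two_passes(text: str) -> str:
    """First draft of edits, then a second pass that catches what the first missed."""
    first_notes = call_model(
        f"Copyedit the following text and list your suggested fixes:\n\n{text}"
    )
    second_notes = call_model(
        "You already produced these copyedit notes:\n\n"
        f"{first_notes}\n\n"
        "Take another pass over the original text below, point out anything "
        f"the first pass missed, and return an improved set of notes:\n\n{text}"
    )
    return second_notes
```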

Can AI innovate?

Some people believe that AI systems can’t innovate, that they just regurgitate whatever human data has come in. I definitely don’t think this is true; it’s just another example of people trying to draw some arbitrary line around a function that is distinctly “human.”

Let’s start with an example I think is unassailable: DeepMind’s Alpha models that learned to play Go. The first model, AlphaGo, was initially trained on human data before moving on to “self-play,” where the model played against itself and continued to get better. But the next model, AlphaZero, was never exposed to human data; it only played against itself, and this model was even better at the game. Clearly this model was able to be creative and discover effective strategies. Moreover, there is the famous “move 37,” which showed a kind of strategy that human experts wouldn’t have picked up on.

“I feel like engineering and articulating a system should still be hard jobs to hand to AI, but simply coding may not be a relevant human job.”

I think it’s clear that that kind of system is innovative. But that system is a reinforcement learning system that discovers new strategies based on an objective function. Nowadays, when people ask if AI can innovate, they’re really asking whether generative AI systems like ChatGPT can innovate. Besides the fact that these models will clearly continue to get better and incorporate reasoning approaches like tree search as AlphaGo did, I’d argue that even the current systems are innovative. Amongst the multitudes of ideas humanity has already come up with, there are connections that have never before been made. “Interpolating” between different ideas is a form of innovation. It’s not thinking completely outside of the box, but when the box is all of humanity’s creativity to date, there’s a lot undiscovered in the box.

If I were an “ideas” person with no hard skills, how far could I get today by “hiring” AI to build some digital product I’ve thought up?

We’ve seen some of this with GPT-4. You can ask it for the steps to build a certain kind of website and it will generate the steps and the code. If you don’t have any technical skills, you’d still have to get someone to put it into VS Code. One recent advancement there is a GPT-4-based agent called Devin that goes a bit further, generating code right in the editor and using other tools to come even closer to getting a barebones website up and running. But judging from what I’ve seen, with either of these options you’d still need someone with good programming knowledge to make fixes to get it right.

I doubt for a decent amount of time that you’re not going to want people who are great designers, great engineers, great whatever, working alongside AI systems.

So maybe AI is a builder you could hire … but it’s only a novice-level builder who needs a lot of micromanaging from experienced humans.

I like that analogy. The way this would usually work is you go hire people, tell them in some form what you want, and then tell them to “do magic,” right? And then they come back and they have the output. If they aren’t people, or in this case systems, who can get the job done well and as you wanted, then you still need a more experienced technical manager to work with them. So if you’re expecting to “hire” AI to replace the work of an engineer today, you’re setting yourself up for failure. Over time, the amount of micromanaging will lessen, and you will be able to trust AI systems with larger scopes of work and more ambiguity.

“I doubt for a decent amount of time that you’re not going to want people who are great designers, great engineers, great whatever, working alongside AI systems.”

Will that be much different in the near future? Do we expect these systems to become so much better that they could replace higher-level builders?

Will software engineering be around in 40 years? I don’t know. I really kind of doubt that coding will be an important skill set. I feel like engineering and articulating a system should still be hard jobs to hand to AI, but simply coding may not be a relevant human job. And I can’t say I’m sure about some software engineering work.

I don’t want to equate software engineering with “building,” though. If things go well with AI, the barrier to building will no longer be your technical knowledge of interacting with code. The skill set will be different, and we could see a vast ecosystem of smaller ventures with narrower customer bases thanks to the lower startup cost. If the small team of the future can create a product with the polish of today’s larger startups, we should expect the market to support narrower problems being solved. Think about YouTube: thanks to the lower cost of distribution and creation, we have many more creators who can serve narrower niches. I don’t know if things will definitely go this way—there are many uncertainties and I’m really not an unbridled optimist—but it’s a nice future to dream about.

So your advice to digital builders today is upskill? Grunt work will soon be outsourced to the robots?

You might not be replaced by AI, but you’ll definitely be replaced by someone who’s using AI. So you should learn how to use AI. That’s definitely the right advice for the moment for anyone.

If we think of AI today not as a role replacer but as a role augmenter, it becomes way more powerful. You just have to measure what you can do by yourself against what you’d be able to do if you were empowered by an endless bunch of, let’s say, interns. You could have interns who are good at a lot of things, there to serve your own creativity. So you should get good at using this tool.

This interview was edited for clarity and brevity.


