
Frank Lessons on AI from the Developer of the World’s First Hotdog Recognizer

Theresa Carper

Speculation about the future of artificial intelligence divides largely into two camps.

The alarmist corner believes our technical advances risk the extermination of humanity by increasingly intelligent machines. This fear is fueled by the bad boy of tech, Elon Musk, who’s calling on the government to regulate artificial intelligence, or, in his words, “our biggest existential threat.” Yikes!

The opposing side cites the victories AI has already won in health care, transportation, and daily-life conveniences. Just be careful what you build, they say, and we can make the smart world an egalitarian reality. Bright-eyed Mark Zuckerberg champions this optimism as he tucks a daisy behind the ear of Jarvis, the AI butler he built to run his home.

Any time these great minds feud on Twitter, the divide only widens.

For the sake of enlightening the middle ground, I’d like to introduce a third contender to the debate.

“I’m just the guy who built a hotdog app.”

In addition to the self-proclaimed title above, Tim Anglade is a developer, a product strategy veteran, and an Executive in Residence at Scale Venture Partners, where he specializes in marketing and mobile.

Tim first became fascinated with artificial intelligence as a developer and hobbyist. He was working at Scale Venture Partners when a presentation by the TensorFlow team reignited that interest. Excited by the capabilities of open-source machine learning, Tim expanded his thinking to a more strategic, product-oriented approach and began taking classes.

He was also a consultant on season four of Silicon Valley, a show you’re likely familiar with if you’re reading this article (and if you’re not, I recommend a deep dive through the whole series on HBO Now or HBO Go). In episode four, budding entrepreneur Jian Yang wants to develop an app that gives instructions for cooking his grandmother’s octopus dishes. In a classic misunderstanding, incubator admin Erlich Bachman becomes his lead investor, believing it has something to do with Oculus, the VR platform from Facebook. When he realizes his mistake, Erlich pivots the app mid-pitch to “Shazam for food.” A disgruntled Jian Yang does develop an app that can recognize food, but homes in on a very, very specific use case: it can only identify hotdogs and “not hotdogs.”

True to their comedic stylings, the Silicon Valley writers took the joke a step further. As the on-staff mobile expert with an interest in exploring AI, Tim signed on to develop and release the real app. Not Hotdog was born.

This is the story of what Tim learned while building an AI-driven product, and how it quickly became more than just a hotdog recognizer.

“Think very carefully about what people are actually going to do with your artificial intelligence.”

Any product person will tell you that understanding user behavior is fundamental to the development of a product. Not Hotdog is no different.

Not Hotdog is trained to recognize images using a neural network, loosely modeled on the web of neurons in the human brain. When it comes to technology that seeks to mimic human intelligence, closing the gap between how the mechanics work and how users intuitively expect the product to behave is critical.

Feed a neural network enough photos of a hotdog, and it can learn to recognize a hotdog. But it is what it eats, so be careful.
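
For the technically curious, here’s roughly what that feeding looks like: a minimal transfer-learning sketch in Keras, assuming a hypothetical photos/ folder sorted into hotdog/ and not_hotdog/ subfolders. It illustrates the technique, not the app’s actual code.

```python
# A minimal transfer-learning sketch, assuming a hypothetical photos/ folder
# with hotdog/ and not_hotdog/ subfolders. Not the app's actual code.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "photos/", image_size=(224, 224), batch_size=32
)

# Reuse a network pretrained on ImageNet; train only a small head on top.
base = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),     # outputs P(hotdog)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```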

For instance, early in the process of building the app, the team realized that most people would take closely cropped pictures of their food (hotdog or not). Very few users would take shots of someone actually eating a hotdog. Including these wide shots in the dataset generalized the network in ways that weren’t beneficial.

Tim shared an example of this: “The network would assume, for example, that any picture taken in a baseball stadium was of a hotdog. Because there were a lot of pictures of people holding hotdogs in baseball stadiums, it would draw a correlation that wasn’t correct. There wasn’t really a need for us to train on anything except cropped pictures of food.”
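
One way to approximate that curation in code (an assumption on my part, not the team’s documented pipeline) is to crop every training image to a tight center square before it reaches the network, keeping the sausage and throwing away the stadium:

```python
# A hypothetical curation step: crop each training image to a tight center
# square, dropping the peripheral context (crowds, stadiums) that breeds
# spurious correlations. An assumption, not the team's documented pipeline.
import tensorflow as tf

def center_crop_to_square(image, size=224):
    side = tf.minimum(tf.shape(image)[0], tf.shape(image)[1])
    image = tf.image.resize_with_crop_or_pad(image, side, side)  # largest centered square
    return tf.image.resize(image, (size, size))                  # normalize resolution

# Applied per image, on an unbatched dataset:
# ds = ds.map(lambda img, label: (center_crop_to_square(img), label))
```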

In a field as potentially devastating as artificial intelligence, Tim stresses the importance of grounding product development in user expectations, especially when those expectations run up against your own blind spots.

“There are many different interpretations you can have of a single problem.”

In food, as in everything else, humans attempt to create forms that are useful and universal.

A hotdog, according to Wikipedia, is “a cooked sausage, traditionally grilled or steamed and served in a partially sliced bun.” Easy enough. But a scroll through the “hotdog variations” page proves the definition is far more complicated. The US alone lists 19 regional recipes.

“Hotdogs” get even more colorful around the world. You can eat a hotdog topped with a quail’s egg in Colombia, or a hotdog in a crepe with sweet chili sauce in Thailand, or a hotdog punched inside a roll in the Czech Republic.

“Something as simple as a hotdog ends up being culturally complex in a neural network,” Tim explained.

This complexity—determining how high to set the confidence threshold for hotdog recognition—poses an ancient thought experiment to a neural network. Theseus’ paradox questions whether an object that has had all its parts replaced remains fundamentally the same object. Consider it the earliest image classifier problem. Is an Alaskan caribou-dog a hotdog? Or a Taiwanese hotdog that swaps the bun for a stick? What about the South American salami and chips substitute?

Not Hotdog creates a sort of digital trapdoor in this philosophical query. The machine brain gives an absolute answer to whether an object is a hotdog. But since the identifier is powered by examples provided to its neural network, this “absolute” originates from the subjective criteria of whatever Tim and his team considered a hotdog.
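
Mechanically, the trapdoor is tiny: the network emits a continuous score, and a human-chosen cutoff turns it into a verdict. A minimal sketch, reusing the hypothetical model from earlier (the 0.5 threshold is my assumption; the real value is a product decision):

```python
# The "trapdoor" in miniature: a continuous score becomes a binary verdict
# at a human-chosen cutoff. The 0.5 here is an assumption, not the app's.
HOTDOG_THRESHOLD = 0.5

def classify(model, image):
    # `image` is a single preprocessed array; add a batch dimension for predict().
    p_hotdog = float(model.predict(image[None, ...])[0, 0])  # sigmoid output in [0, 1]
    verdict = "Hotdog" if p_hotdog >= HOTDOG_THRESHOLD else "Not Hotdog"
    return verdict, p_hotdog
```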

They could feed the dataset more varied examples to reflect the hotdog’s true diversity in form, but they’d risk muddling its identification abilities. For instance, it would have to learn to differentiate Japanese hotdogs, often cut to resemble an octopus, from the actual animal (and lest we forget the potential confusion with Oculus).

Or, they could train the app with a more focused dataset, loading it with select examples so it could recognize a narrower definition of hotdog with better accuracy.

For the sake of dataset simplicity, they decided not every dog could have its day. They stripped it down to its most basic (American) form: sausage, bun, ketchup and/or mustard. In the end, Tim and his team made the same choice as Jian Yang and many other product developers: get very good at a small thing first.

“It’s very uncanny how quickly your own cultural biases can lead you astray.”

A few users did express disappointment at the lack of representation, taking to the App Store to write actual reviews like this one: “Works great so far but doesn’t accurately identify all hotdogs. The developer needs to check his own biases regarding other hotdogs.”

A skim through these reviews reveals their facetiousness. The app has not incited any real cultural incidents, according to Tim. He’s careful not to equate the pitfalls of a party-gag app with the far more nuanced complications of AI. But he readily acknowledges how the app reflects unconscious bias, and he encourages caution when it comes to products that could create serious problems.

“You don’t want your app to be a reflection of the built-in ethical problem that this technology carries with it. I’m not worried about the dangers that Elon Musk likes to talk about–of AI being a civilizational challenge. But I do worry about the everyday biases that are going to be built into new products, such as using AI for crime detection and the racial implications of that.”

Take your pick from many overt instances of bias in artificial intelligence. There’s Microsoft’s “Tay” Twitter chatbot that spewed racism, misogyny, and its admiration for Hitler within 24 hours of its release. Terrible, and that’s just an Internet microcosm. Risk assessment algorithms are a more devastating example. Relying on data pulled from questionnaires and criminal records, the algorithms are used to predict a defendant’s likelihood of committing a future crime. The results are given to judges during sentencing. This nationally used software falsely flags black defendants as future criminals at nearly double the rate of white defendants.

These instances uncover a raw truth, but it’s not about AI. It’s about us. People cannot abdicate responsibility for the technology they build once they ship it. As Tim says, “We must remember that AI will not make better decisions than humans. This technology is only as good as the data it’s fed, so everyone must be vigilant about eradicating biases. Bias really sneaks up on you, even on the simplest of use cases. I’m always trying to make people aware of that, even when it isn’t a major ethical or legal dilemma.”

If we transmit biased information, what can we expect from our technology? And if cultural biases come up in a hotdog app, is there any hope for general AI?

Tim’s optimistic. Why? Because even if humans are bad at recognizing biases, they’re great at something else: complaining.

“Allow for more mistakes.”

User complaints create value for any product, and they’re especially important to neural networks. On his wish list for the app, Tim wants to include a feedback mechanism for when Not Hotdog thinks your poolside legs are meat. Flagging mistakes would actively improve the neural network, since networks learn from the data they’re given.

“If developers built a UX that would allow for more mistakes and more room for error–and more room for levity–then I think they’d be able to deliver better user satisfaction.”

That’s not to suggest that anyone should tolerate Tay driving itself further into depravity. But rather than simply pulling the plug, developers should build effective mechanisms to communicate with a machine when it acts unexpectedly. The feedback feature for the Not Hotdog app reflects Tim’s belief that users should be able to collaborate with AI more instructively, and that developers should be able to manage failure modes when the AI isn’t doing what they want it to.
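
What might that wish-list feedback feature look like? A hypothetical sketch (nothing here is from the shipped app): record each user correction so a later fine-tuning job can retrain on the flagged examples.

```python
# A hypothetical sketch of the wish-list feedback loop, not the shipped app:
# log each user correction so a later fine-tuning job can retrain on the
# flagged examples.
import json
import time

def flag_mistake(image_path, predicted, corrected, log_path="corrections.jsonl"):
    record = {
        "image": image_path,        # e.g. a locally cached copy of the photo
        "predicted": predicted,     # what the model said, e.g. "Hotdog"
        "corrected": corrected,     # what the user says it is, e.g. "Not Hotdog"
        "timestamp": time.time(),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")  # one JSON record per line

# flag_mistake("poolside_legs.jpg", predicted="Hotdog", corrected="Not Hotdog")
```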

As technology progresses, use cases for AI will also expand. Managing its influence—and its dangers—becomes increasingly critical as artificial intelligence integrates deeper into our lives.

“Use AI in a sensible way.”

Tim, Musk, and Zuck find common ground here: technology may be neutral in itself, but it is easily turned toward good or bad ends.

AI reflects the values of its creators. If we want to debug biases, we must examine the intentions of those developing the technology. Luckily, sci-fi thrillers paint a much more insidious picture of the artificial intelligence community than the reality.

Based on his experience, Tim characterizes the AI community as an incredibly open environment. “I think the people at the core of it are good-natured and just want to share with others.” The AI giants unite behind this dream too. Facebook has released its bot sources for API.ai integration, hoping to champion open-source projects. Musk and Sam Altman have founded OpenAI, a non-profit AI research company that publishes its findings as well as open-source software tools.

This openness was crucial to Not Hotdog’s development: “From research papers to open source projects and blog posts on neural networks, tons of people in the community have been really helpful to a newcomer like me.”

It’s a great feeling, Tim says, and it’s what propelled him to document his experience in the hopes that it will inspire other developers, teams, and companies to build their own apps. He wrote a very practical account of his process, which The Signal recently republished. Thanks to his blog post, a dozen or so developers have mentioned projects that tweak Not Hotdog, such as apps that detect calories in food or recognize poisonous mushrooms.

“Build stupid things.”

Interested in learning how to run a neural architecture on your phone? Want to explore image-classifier problems? Why not build an app?
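
If the on-your-phone part sounds daunting, it mostly comes down to exporting a trained model into a mobile runtime. Here’s a hedged sketch of one common path, TensorFlow Lite, continuing from the earlier hypothetical Keras model (an illustration, not necessarily Not Hotdog’s own pipeline):

```python
# One common path to on-device inference (an illustration, not necessarily
# the app's own pipeline): convert the trained Keras model to TensorFlow
# Lite and bundle the resulting file with the mobile app.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(model)  # `model` from the earlier sketch
converter.optimizations = [tf.lite.Optimize.DEFAULT]         # quantize to shrink the download
tflite_bytes = converter.convert()

with open("not_hotdog.tflite", "wb") as f:
    f.write(tflite_bytes)
```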

Sure, the use case is narrow and absurd, but the Not Hotdog app provides a valuable example of what can be accomplished with artificial intelligence today—even without funding. It’s not just for giants anymore!

So if Musk still fears AI, he has obviously never experienced the delight of Not Hotdog correctly identifying street meat off a Mission Street food cart at 3 am.

The app would get a practiced laugh out of Zuck, who would then likely spin it into a Facebook Messenger chatbot that suggests nearby hotdog stands when your friend DMs that they’re hungry.

Tim proves that delving into an AI project for the sake of learning and sharing techniques can generate impressive and positive outcomes. Think carefully about the ethical implications, but don’t let alarmism scare you away from creating something. Tinkering can translate into a cool product that doesn’t have to threaten human existence.

So if Tim could do anything with AI, what would he build?

“Probably a better hotdog recognizer. I have always encouraged people to build stupid things,” he says. “If people didn’t attempt crazy projects that they felt weren’t possible, or that people told them were impossible, we probably wouldn’t be getting a lot of breakthroughs.”

And while we’re working on those breakthroughs, we’ve got hotdogs… and Not Hotdogs.
