How AI can help us build the future we want

Last edited: Mar 1, 2022 Published: Mar 29, 2018
Mixpanel Team

If private space companies are any indicator, some billionaires think AI is so dangerous that they are trying to escape to Mars. But there are other perspectives.

Beneath all the doomsaying and dystopian science fiction, many of the smartest AI algorithms have rather banal jobs. They try to convince consumers to click ads, entice people to spend more time on social media, or recommend new running sneakers.

What exactly do we mean when we say AI? Artificial intelligence—often mentioned but little understood—is a blanket term for several types of algorithms that learn and perform tasks, sometimes without being explicitly programmed to do so. They think, reason, and understand the world via text, audio, visuals, and emoji.

AI algorithms are rarely able to perform more than one task. Most are narrowly applied to singular, repetitive, computation-heavy chores like identifying photos and scraping the web. They are much less versatile than the walking, talking, sentient robots depicted in the 2001 movie A.I.
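
To make “learning without being explicitly programmed” concrete, here is a minimal sketch in which a model infers one narrow rule from labeled examples instead of from hand-written logic. The tiny dataset is invented purely for illustration.

```python
# Minimal sketch of "learning without being explicitly programmed":
# no spam rule is written by hand; the model infers one from examples.
# The tiny dataset below is invented for illustration.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = ["win a free prize now", "free cash click here",
            "meeting moved to 3pm", "lunch tomorrow?"]
labels = ["spam", "spam", "ham", "ham"]

# The "programming" is just showing the model labeled examples.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["claim your free prize"]))  # -> ['spam']
```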

Hila Mehr, former fellow at the Harvard Ash Center for Democratic Governance and Innovation, believes AI algorithms can be used for social good. Hila spent several years studying the impact AI might have on improving citizens’ access to their governments. In a research paper, she outlined the clear opportunities for how AI can help the public sector serve citizens.

In the private sector, the AI startup Cultivate uses algorithms to help employers reduce bias in the workplace. Co-founders Joe Freed and Samir Meghani believe technologies like theirs are key for companies that want to attract and retain more diverse teams.

The Signal sat down with Hila, Joe, and Samir to discuss AI’s potential for good.

AI could help the government get its act together

Because AI algorithms are both powerful and tireless, Hila believes they can help public institutions better serve citizens. In her research paper, she lays out several areas where algorithms can increase public access, such as routing requests and translating documents.

Conversational interfaces like chatbots could help governments answer citizens’ questions. Fewer people would need to wait on hold with the IRS, for example, and AI would free the IRS’s experts to help those most in need. Algorithms could also crunch data for the Congressional Budget Office or file paperwork for people who apply for Social Security benefits. Robots with computer vision have sorted mail since the 1990s, saving the U.S. Postal Service hundreds of millions of dollars each year.
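
To make the routing idea concrete, here is a minimal sketch of keyword-based request routing. The offices, keywords, and function name are hypothetical; a production system would use a trained intent classifier rather than keyword matching.

```python
# Minimal sketch of routing citizen requests to the right office.
# The offices and keywords below are hypothetical; a real system
# would use a trained intent classifier, not keyword matching.

ROUTES = {
    "tax": "IRS helpdesk",
    "refund": "IRS helpdesk",
    "social security": "Social Security Administration",
    "benefits": "Social Security Administration",
    "passport": "Department of State",
}

def route_request(message: str) -> str:
    """Return the office best matched to a citizen's question."""
    text = message.lower()
    for keyword, office in ROUTES.items():
        if keyword in text:
            return office
    return "general services desk"  # hand off to a human operator

print(route_request("Where is my tax refund?"))       # IRS helpdesk
print(route_request("How do I apply for benefits?"))  # Social Security Administration
```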

AI has the potential to improve U.S. citizens’ trust in political institutions at a time when their faith in government is at an all-time low. Only nine percent of Americans say they trust Congress. Millions are considered underserved, partly because 75 percent of the $90 billion of federal technology spending goes toward maintaining inefficient legacy systems that are begging to be replaced. These are prime opportunities for AI to increase access.

Yet things could go wrong. If algorithms are applied without considering the root causes of systemic government inefficiency, they might automate inequity. The cities of Los Angeles, Pittsburgh, and Indianapolis have applied algorithms to the allocation of social care and benefits, and some reversed course after the algorithms wreaked havoc on underserved communities like the poor and the homeless.

These errors are particularly problematic in the criminal justice system. When risk-assessment algorithms and predictive policing systems reflect existing biases, they can amplify racial profiling and send innocent people to jail.
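
One way teams can check for this kind of skew is to compare a model’s outcomes across demographic groups. Below is a minimal sketch of such an audit on invented risk scores; the records, groups, and threshold are all hypothetical.

```python
# Minimal sketch of auditing a risk-assessment model for disparate
# impact: compare how often each group is labeled "high risk".
# The records, groups, and threshold are invented for illustration.

records = [
    {"group": "A", "risk_score": 0.82},
    {"group": "A", "risk_score": 0.35},
    {"group": "A", "risk_score": 0.71},
    {"group": "B", "risk_score": 0.30},
    {"group": "B", "risk_score": 0.44},
    {"group": "B", "risk_score": 0.28},
]
THRESHOLD = 0.5  # scores above this are flagged "high risk"

def high_risk_rate(group: str) -> float:
    """Share of a group's scores that exceed the flagging threshold."""
    scores = [r["risk_score"] for r in records if r["group"] == group]
    return sum(s > THRESHOLD for s in scores) / len(scores)

gap = high_risk_rate("A") - high_risk_rate("B")
print(f"Demographic parity gap: {gap:.2f}")  # a large gap warrants human review
```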

According to Virginia Eubanks, author of Automating Inequality, algorithms merely reflect society’s judgments. Systematic AI errors can almost always be traced back to a flawed cultural assumption, such as the idea that the poor are undeserving of assistance. Virginia claims AI is frequently used to “avoid having tough conversations,” such as when Los Angeles allowed an algorithm to restrict homeless people’s access to shelters rather than tackle the underlying societal factors that lead to homelessness.

AI must be implemented thoughtfully and strategically, says Hila. “Humans are naturally biased and humans bias AI unintentionally. Anyone applying AI needs to be thinking about mitigating ethical risks.”

Yet all the doomsday predictions may be overwrought: The field of design thinking has long offered solutions to the type of problems AI creators are up against. More and more technology companies are recognizing that diverse teams can create more helpful user experiences for more people. “Most product teams build the product they want for themselves,” former Uber product leader Mina Radhakrishnan told The Signal. “But the more identities, backgrounds, and experiences represented by creators, the more problems solved, the more user perspectives understood, and the more products launched by teams who have a handle on how the world will receive them.” Simply put, having a diverse team reduces negative biases and helps companies and institutions create more equitable AI algorithms.


Designers who apply design thinking can help the government avoid ill-fitting, top-down mandates by looping citizens into the conversation. They can act as citizens’ advocates, and citizens can become designers for their own communities.

“I would always want someone who is from a given community designing for that community,” says Hila. “They’ll have existing relationships and a fundamentally deeper understanding of how best to solve the problem. For example, people in the field of inclusive design work to include people typically left out of discussions, like Black and Latinx communities, and challenge them to solve issues they’re facing. It’s not designing for people, it’s letting people design for themselves.”

Ethical inclusion, then, is at the heart of AI systems for good. Teams deploying AI for government services are thinking deeply about the implications of their applications, says Hila. Private companies are also heeding the call.

How startups use AI to improve diversity and inclusion in the workplace

If humans unintentionally bias AI, a handful of private companies are realizing that AI can help dig them out of that hole. The AI startup Cultivate applies natural language processing and machine learning algorithms to workplace conversations to help remedy the unconscious biases that lead to product design mishaps and monocultures.


“Unconscious biases are tricky because you can’t really get feedback on them anywhere,” says Cultivate CEO and Co-Founder Joe Freed. “Instead, workplace feedback is usually couched in platitudes like, ‘at least you’re trying,’ or else not delivered at all. Many of our customers have started to build diverse teams but then struggle with ways to retain people from underrepresented groups. And we know that people don’t leave companies, they leave managers. So what can we do to make those managers better?”

Cultivate AI analyzes companies’ email communication and gives managers private feedback on their biases. It’s careful never to pass judgment. “If AI tells people what to do, it creates an uneasiness factor that the entire industry is trying to avoid,” says Cultivate co-founder Samir Meghani. “That’s why Cultivate simply provides real-time suggestions like, ‘Maybe don’t email Jordan at 2 am—she’s asleep.’” It also lets managers know if they’re friendlier toward certain reports, or use condescending language toward others.
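
Cultivate has not published its implementation, but a toy version of the send-time nudge Samir describes might look like the sketch below. The working-hours window, names, and function are invented for illustration.

```python
# Toy version of a send-time nudge. Cultivate's actual implementation
# is not public; the working-hours window and names are invented.

from datetime import datetime
from typing import Optional

WORK_START, WORK_END = 8, 18  # assumed 8am-6pm working window

def send_time_nudge(recipient: str, send_time: datetime) -> Optional[str]:
    """Return a gentle suggestion if an email goes out off-hours, else None."""
    if not WORK_START <= send_time.hour < WORK_END:
        return f"Maybe don't email {recipient} at {send_time:%H:%M}. They may be asleep."
    return None  # no nudge needed during working hours

print(send_time_nudge("Jordan", datetime(2018, 3, 29, 2, 0)))
```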

If algorithms can help teams reduce their biases, they can help them build more successful products. A study from Harvard shows that workplaces with a diverse staff are more innovative. Another from MIT suggests that diverse companies earn greater profit. “The key takeaway is that a variety of perspectives lead to better work,” wrote Ruth Reader in a Fast Company article.

Cultivate is not alone in its anti-bias mission. Jigsaw’s Perspective project uses AI algorithms to combat online harassment, and startups like Ideal, Textio, and Blendoor use AI to reduce unconscious biases in hiring. All of these companies imagine a future where humankind deploys AI to counterbalance the effects of other algorithms that carry negative biases, and all share the belief that AI should augment people rather than replace them.

In the public and private workplaces of the future, AI’s greatest application for good will be to make people more effective at their jobs. Science fiction that depicts AI robots conspiring against humanity makes the false assumption that robots will be granted this power. But what Cultivate and other AI startups find is that the best decisions still come from humans. People who are given AI-powered recommendations on topics their brains weren’t designed to process—such as eliminating bias—ultimately make more informed decisions than either humans or machines alone.

“You don’t want AI to be the ultimate decision maker,” says Hila. “You want it to inform human users who have ultimate oversight. Early pilots of AI show that it’s more successful when it’s augmenting human tasks. That’s especially true in the public sector where AI can have a huge, life-changing impact such as in the justice system.”

Things to consider when implementing AI

Even in a “move fast and break things” culture, we sometimes have to move slower and break less, or else risk building products that don’t align with our intentions or values. Here’s what Hila, Joe, and Samir recommended to any company deploying AI:

Consider the ethics

Algorithmic decision-making is never neutral, according to AlgorithmWatch, a Berlin-based watchdog. If technology companies don’t consider the ethical implications of their products, they risk creating platforms like Facebook, which, for all its good intentions, was abused by Russian operatives.

Employing ethicists helps, as does making sure that end-users or citizens have input. Creators must tread carefully with users’ privacy and trust because users’ buy-in and involvement are crucial in avoiding costly mistakes.

Disclose your AI

Today, it’s common for users to ask chatbots whether they are talking to a real person. Hila advises companies to disclose when they are using AI “anytime it might appear misleading.”


Human resources teams and recruiters, for example, who have the potential to affect people’s careers and income, should announce their algorithmic decision-making. “We’re going to start seeing a lot more disclosures,” explains Hila. “If companies are using AI for diversity and inclusion and value transparency, I think AI disclosures will become standard practice.”

Keep data accurate, private, and secure

Product teams need to ask themselves what data they are collecting and what data they have consent to use. “Data that’s inaccurate or outdated can impact algorithms and the outputs of AI. Creators need to think carefully not only about the data’s shelf life and security, but also about what data they have users’ consent to track in the first place,” says Hila.

The passage of the General Data Protection Regulation (GDPR) in Europe is a first step in a larger trend toward greater data transparency. Consumers increasingly want control over the data companies keep on them. Teams deploying AI should get ahead of this movement and take special precautions to keep users’ data accurate, private, and secure.

Make AI explainable

When lives and livelihoods are on the line, such as when algorithms are used to allocate social services and benefits, AI should augment human decisions, not replace them. For that to be possible, humans must know how algorithms come to their conclusions. But most machine learning algorithms function like black boxes.
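
As a sketch of what “explainable” can mean in practice, an interpretable model such as logistic regression can show a human operator how much each input pushed a given decision. Everything below (features, data, labels) is invented for illustration and is not drawn from any real benefits system.

```python
# Minimal sketch of an explainable decision: a linear model whose
# per-feature contributions can be shown to a caseworker. The features,
# data, and labels are invented for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["monthly_income_k", "household_size", "months_unemployed"]
X = np.array([[1.2, 4, 9], [4.5, 1, 0], [0.8, 3, 14], [3.9, 2, 1]])
y = np.array([1, 0, 1, 0])  # 1 = approved for benefits in past reviews

model = LogisticRegression().fit(X, y)

applicant = np.array([[1.5, 3, 6]])
contributions = model.coef_[0] * applicant[0]  # per-feature log-odds evidence
for name, value in sorted(zip(FEATURES, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {value:+.2f}")  # the caseworker sees why, not just what
print("decision:", "approve" if model.predict(applicant)[0] else "refer to a human")
```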

AI teams should join the movement toward making AI explainable. In the future, it may be a legal necessity. MIT Technology Review has covered the case of a Wisconsin man who challenged his sentence because it was based on an opaque risk-assessment algorithm; the U.S. Supreme Court ultimately declined to hear his appeal. When applying AI, product teams must ensure the algorithm can justify its decisions to human operators.

Work at the local level

AI will have the greatest impact for social good where public and private institutions collaborate. “Going back to the founding of our country, there’s a long history of private and public sector partnership,” says Hila. Cities can create hotbeds for tech entrepreneurship to help startups apply their technology for public good, such as what Boston has done with its Seaport neighborhood. In the Midwest, cities like Kansas City are attracting startup communities. Startups can take cities up on their offers and invest in solving big, long-term problems without immediate payoffs.

“The biggest opportunity for AI is in some of the more unsexy but very important tasks that make government run,” says Hila. “But I think that exciting opportunities are probably at the local level. There’s a lot of innovation at the local level and a lot of great civic tech.” It will be up to local governments and venture capital firms to give these projects life when they aren’t big moneymakers.

AI together now

Time may eventually prove the AI-fearing billionaires right, but we have to remember that any biases AI exhibits are our own. AI is but a mirror, and in this there is hope. If designers and product teams—both public and private—work hard to overcome their own biases through diversity, inclusion, and design thinking, humanity may eventually look into the mirror and find an AI prepared to do social good staring back. But this can’t happen unless humanity is willing to invest the effort.

“Innovation is great, but the right innovations aren’t always the sexy, easy thing to do,” says Hila. “That goes for creating unbiased AI. But for our collective future, we must. We should. And, I think we can.”
