Tell us a little about yourself.
My name is Frank. Right now, I’m a partner at Andreessen Horowitz, which is a Silicon Valley–based venture capital firm that invests in tech startups. Companies that we’ve invested in whose products and services you might use regularly include Instacart, Lyft, Pinterest, Airbnb, Box, Affirm, Groupon, Instagram, Facebook, and Twitter. We’ve also invested in companies serving military and public safety customers, including Anduril, SkySafe, Shield.ai, and Flock Safety.
My day job is helping early-stage founders build their companies. So, imagine you have an idea, and you’ve just raised what’s called a “seed round” (typically a few million dollars) to build the product and start selling it to customers. At this point, you’re doing the hardest thing startup founders do, which is to build something from nothing—at this moment in time, you don’t have a team, you don’t have a product, no one has heard of you, and you have no way to get the word out. I’m privileged to lead a team that helps the founders do all the things they have to do to build a great company: hire their first engineer, do their first product launch, sell to their first customer, and so on.
How should law enforcement executives view emerging technology and especially artificial intelligence?
We believe deeply that technology can help make everyone’s lives and jobs better, more efficient, and more enjoyable. For example, it’s hard to even list all the ways your daily life is better with your smartphone. The iPhone is only 16 years old—just a teenager, really.
For the law enforcement community, technology can help make everything police departments need to do more effective and more efficient. Let’s walk through each of the main goals of a police department and look at near-term technologies that will make that goal easier to achieve:
- Prevent crime. Analyzing data about what types of crimes occur in which regions helps police departments figure out the best way to deter crime. This type of statistical, location-based data analysis used to be accessible only to big companies with enormous IT budgets, but products such as Google Maps, PredPol, Databricks, and Sisu Data have made sophisticated data analysis and visualization available to organizations of all sizes (a short example of this kind of analysis appears after this list).
- Respond to crime. Technology can help officers get to the scene of a crime with more awareness by tapping into cameras installed in fixed locations or even community members’ smartphones. Once there, officers can use technology to gather and analyze all kinds of data: eyewitness accounts, video surveillance, DNA, fingerprints, and other forensic data.
- Maintain public order and safety. Many departments already use technology such as cameras, microphones, and social media monitoring to watch for threats or violations of local ordinances. Increasingly, AI can automatically analyze the output from those systems (video, audio, and text) to identify ordinance violations or emerging threats. In particular, video analytics software can figure out whether there is an active threat in scenes captured by video cameras.
- Build relationships with the community. Police officers can use technology to communicate with the people they protect. Communication should be two-way, both talking and listening. Artificial intelligence can help officers write messages and reports and summarize information from both written police reports and public reports. Writing tools can help fine-tune messages for specific audiences, making sure the writing is appropriate for readers of different educational, gender, or ethnic backgrounds.
- Hire, train, and motivate the officers who serve. Police departments can use AI-powered recruiting, interview, and assessment tools to identify top talent. Once hired, departments can use technology to help onboard, train, and assess officers. Increasingly, AI makes it possible to provide 1:1 tailored coaching in addition to 1:many classroom-type training, helping each officer work on their unique development areas.
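To make “statistical, location-based data analysis” a bit more concrete, here is a minimal sketch of the kind of summary those tools automate, written with the open-source pandas library; the neighborhoods, offenses, and times are invented purely for illustration.

```python
# A minimal sketch of location-based crime analysis: count incidents by
# neighborhood and offense, and look at the hours when incidents cluster.
# All data below is made up purely for illustration.
import pandas as pd

incidents = pd.DataFrame(
    {
        "neighborhood": ["Downtown", "Downtown", "Riverside", "Downtown", "Riverside"],
        "offense": ["burglary", "auto theft", "burglary", "assault", "vandalism"],
        "hour": [23, 2, 14, 1, 22],
    }
)

# Count incidents per neighborhood and offense type.
by_area = incidents.groupby(["neighborhood", "offense"]).size().rename("count")
print(by_area)

# A crude "hot hours" view: the three hours with the most incidents.
print(incidents["hour"].value_counts().head(3))
```

Commercial products layer maps, dashboards, and forecasting on top of summaries like this, but the underlying idea of grouping incidents by place and time is the same.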
What factors should chiefs consider before taking a chance on new technology? What should executives look for in new technology?
- Know what success looks like: What problem are you trying to solve? How will you measure success? How long do you expect it to take? The clearer you are on the goals and the ways you’ll measure success, the higher the chance you’ll succeed.
- Understand where you are on the risk-taking versus risk-avoiding spectrum and match it up with the products and services you are considering. If you’re risk-averse, wait until other police departments that you know well have tried it out. Ask for references.
- Success = people + process + technology. It’s not just about buying the latest body-worn camera or data analysis system. Getting it to work requires you to think through how your people will use it and what processes you have to change to get the most out of the new technology.
What do Silicon Valley investors look for in new technology?
- Big market: Can a company sell hundreds of millions of dollars a year or more of their product?
- Fast growing: Can they double or triple (or more) revenue year over year in the early years?
- Attractive, defensible business models: Is it profitable? Can they defend themselves from the competition?
- Ambitious founders with amazing grit and resilience: The road from startup to household name often takes many turns, many of them unanticipated. Successful founders stick with it even when the going gets tough.
What is artificial intelligence?
Artificial intelligence (AI) is a branch of computer science that deals with the creation of intelligent agents, which are systems that can see, hear, reason, learn, and act autonomously. Most of us use AI every day when we talk with our smart speakers; see a list of recommended videos to watch next; or, more recently, generate art and interact with ChatGPT.
The history of AI can be traced back to the 1950s (when the only computers available were mainframes with far less computing power than the earliest iPhone). In 1950, Alan Turing proposed his famous test to measure the intelligence of a machine, which we now call the Turing test. Imagine you are texting someone you can’t see. If you can’t tell whether you are texting a person or a computer, you’d have to conclude that the computer was intelligent. A few years later, John McCarthy coined the term “artificial intelligence” when he organized the Dartmouth Summer Research Project on Artificial Intelligence in 1956, which is considered to be the founding event of AI research. Since then, there have been several periods of excitement and funding—but no big breakthroughs.
In the mid-2010s, AI research began to make a comeback, thanks to both abundant data (think: YouTube videos, all the text on the Internet) and abundant computing power (mostly concentrated in the cloud at places like Amazon, Google, and Microsoft). Applying that data and computing power to some older techniques such as “neural networks” resulted in AI that could be useful in everyday life. This burst of research, venture funding, and ingenuity fueled a wave of AI technology that led to products many of us use every day, such as Alexa’s speech recognition and question answering, TikTok’s selection of what video to watch next, and our banks identifying fraudulent credit card transactions.
More recently, a sub-branch of AI called “generative AI” has become very popular, and we expect it to be embedded into most categories of software. These generative AI products, such as ChatGPT or Midjourney, generate (or create) articles, essays, music, voice recordings, artwork, photographs, and even videos that are indistinguishable from similar products created by humans.
These techniques are so powerful that a computer-generated photo recently won a Sony photography contest, and the editor of the German magazine Die Aktuelle was fired after it was revealed that an interview she published with the Formula 1 racer Michael Schumacher was generated by AI rather than based on an actual interview with Schumacher. A generative AI product from the company OpenAI called ChatGPT became the fastest product ever to reach 100 million monthly active users, just two months after it launched. (For comparison, it took Netflix 10 years, Facebook 4.5 years, and Instagram 2.5 years to hit that same milestone.)
How can police departments use AI to help keep their communities safe?
AI technology in the form of image recognition is already being used to help solve crimes. For example, more than 3,000 police departments around the world are already using image recognition technology to automatically recognize license plates, car makes and models, and even whether a vehicle has had any modifications such as after-market wheels or a ski rack installed. Officers have used this technology to catch the mass murder suspect in Cobb County in May 2023, break up a catalytic converter theft ring, and rescue more than 80 human trafficking victims.
Sharing images between different police departments as suspects move between multiple jurisdictions can help departments work together to solve crimes.
As with body-worn cameras, departments need to be thoughtful about the policies around how they use the technology. How long are images saved? Who can look up a car’s location? How will the public learn about the technology’s use? But once thoughtful policies are in place, the technology helps lower crime by increasing both the likelihood and the speed of apprehending criminals.
Other forms of AI are also in widespread use in communities around the world, including predictive analytics that help determine where crime is likely to take place, social media monitoring to track flash points in communities, and DNA analysis to establish whether particular people were in contact with specific pieces of evidence or at a particular place.
Say more about the latest wave of generative AI. What is it, and how is it different from a search engine?
Generative AI is a type of artificial intelligence that can create new content, such as text, images, music, or even video. It does this by learning patterns from existing data and then using those patterns to generate new examples. Examples of generative AI products include:
- Chatbots such as OpenAI’s ChatGPT, Microsoft Bing, and Google Bard
- Creator tools such as Midjourney, which helps you create art and photorealistic images; Character.ai, which helps you build conversational agents based on real people (such as Stephen Hawking, Lady Gaga, or Keanu Reeves), historical figures (such as Rosa Parks or Winston Churchill), and fictional characters and universes (Mario or Luigi from Super Mario Brothers, Wednesday Addams from the Addams Family); and ElevenLabs’ audio generation tool, which creates natural-sounding speech.
The main difference between a search engine and a chatbot is that a search engine is designed to find information, while a chatbot is designed to have a conversation, answer questions, and solve problems.
A search engine is a computer program that is designed to find information on the internet. It does this by crawling pages across the internet and building an index of what each page is about. When you search for something, the search engine returns a list of websites that it believes are relevant to your search query.
A chatbot is a computer program that is designed to simulate conversation with human users. It does this by using natural language processing to understand what the user is saying and then generating a response that is both relevant and informative. For example, you can ask the chatbot questions, and it will try to answer them. Or you can give it a request (called a “prompt”) such as “write an article” or “write a melody” or “write a computer program,” and it will write an appropriate response. One published list offers 215 prompts for a wide variety of writing tasks, ranging across computer programming, cooking, making art, and travel.
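To make the idea of a “prompt” concrete, here is a minimal sketch of how a program might send one to a chatbot service. It assumes an OpenAI-style chat completions API and an API key stored in the OPENAI_API_KEY environment variable; the prompt text and model name are only illustrations, not recommendations.

```python
# A minimal sketch of sending a "prompt" to a chatbot service.
# Assumes an OpenAI-style chat completions API and an API key stored
# in the OPENAI_API_KEY environment variable; model name is illustrative.
import os
import requests

prompt = "Draft a short public-safety announcement about holiday traffic enforcement."

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
    },
    timeout=30,
)

# The generated reply comes back as the first "choice" in the response.
print(response.json()["choices"][0]["message"]["content"])
```

Most users, of course, will simply type the same prompt into a chatbot’s web page; the sketch just shows that a prompt is ordinary text sent to a service that replies with generated text.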
Could this technology be used to help officers do their job and even solve crimes?
Officers can start using chatbots today anywhere they are working with documents such as incident reports or presentations to city offices. AI-powered tools can help you communicate more effectively by writing drafts of your document; creating images, music, or videos to go with your text; or helping to better tailor a presentation to a specific audience (for example, a user can ask a chatbot to “help me explain these ideas to elementary school students”).
An emerging trend is to feed these generative AI systems specific documents and then enable users to ask questions about those documents. You can see an example of this type of technology at Google’s Talk to Books site. You can ask a question such as “When was a convicted killer later exonerated based on new evidence?” and the site will “ask” the books in its database for answers.
Imagine a system where an AI “ingested” all of the data associated with criminal cases: police reports, depositions, pictures and video evidence, DNA evidence, and so forth—and then detectives and defense attorneys could “ask” the system questions about the data. Technology like this could turn AI into one of the best crime-fighting sidekicks, helping to increase the solve rate and lower the resources needed to apprehend and prosecute criminals.
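As a rough illustration of that idea, the sketch below ranks a few made-up case documents by how relevant they are to a plain-English question. It uses simple keyword (TF-IDF) matching from the scikit-learn library as a stand-in for the far more capable generative systems described above; the documents, the question, and the library choice are all assumptions made for the example.

```python
# A minimal sketch of "ingest documents, then ask questions about them":
# rank a handful of invented case documents by relevance to a question.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Incident report: catalytic converter stolen from a parked truck on Elm St.",
    "Witness deposition describing a gray sedan leaving the scene at 11 pm.",
    "Forensic summary: partial fingerprint recovered from the passenger door.",
]

question = "Did anyone see a vehicle near the scene?"

# Turn the documents and the question into TF-IDF vectors,
# then score each document by cosine similarity to the question.
vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents)
question_vector = vectorizer.transform([question])
scores = cosine_similarity(question_vector, doc_vectors)[0]

# Show the most relevant document first.
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```

In a real product, a retrieval step like this would hand the most relevant documents to a generative model, which would then write an answer in plain English and cite the underlying records.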
Early versions of this technology are already available to help attorneys prepare for a trial by sifting through, categorizing, and analyzing text. A new generation of this “question answering” technology is being built as we speak for every profession, from investors to marketers, from lawyers to elected officials. Everyday citizens can even use these AI powers to fight for their own rights: you can use a service called DoNotPay to help you cancel subscriptions, get airline flight compensation, or appeal parking tickets.
Everything seems to be “in the cloud.” Why?
It might surprise some readers to know that most spending on computing still takes place “on premise,” meaning that people and organizations purchase and operate their own laptops, servers, and storage equipment to run software. But Gartner, the largest analyst firm tracking technology spending by organizations, predicts that spending on cloud computing will surpass on-premise spending by 2025.
This long-term trend toward the cloud is happening for a few reasons:
- It’s easier to scale: Cloud computing is scalable, which means that you can easily add or remove resources as needed. If you have an application that turns out to be super popular, you don’t have to worry about buying enough servers, networking gear, and storage equipment to keep up with demand. Some of the latest generative AI systems depend on a lot of interconnected computers, which are too expensive for most police departments (or even large universities) to own and operate.
- It’s easier to use: Cloud computing is easy to use, even if you don’t have a lot of technical expertise. Leaders and even individual users can just sign up for a new product without waiting for the department’s own IT resources to evaluate, deploy, and test a new system in their own facilities.
- It’s often more secure: Cloud providers typically have strong security measures in place to protect data. This can be a major benefit if you are concerned about security.
Prediction time—How will AI change policing?
As Yogi Berra said, “It’s tough to make predictions, especially about the future.” Having said that, I believe that, in a few years, every police officer will have a personalized AI coach they interact with by talking. That AI coach will help them regardless of what specific job they have in the department.
- For patrol officers, AI systems embedded in smart glasses or smart contact lenses will create an augmented reality overlay over real-world scenes, identifying objects (buildings, cars), scenes (Is there a threat situation?), and historical data (crimes reported at this location in the past).
- For detectives and crime analysts, AI systems will help solve crimes by making it easier to gather, analyze, and act on evidence. Detectives will be able to ask their AI assistants questions (e.g., How persuasive would this piece of evidence be in a trial? Based on similar cases in the past, what else should I ask this informant?) and give instructions (e.g., Help me prioritize the next leads to pursue; Show all the suspects and their motives and opportunities).
- For police leadership, AI systems will help them identify the best recruits, provide tailored coaching to each officer, and be more effective managers. They’ll also be able to make smarter hiring, promotion, and demotion decisions based on the needs of the community.
- For public information officers, AI systems will help officers listen to and communicate with members of the public with knowledge and empathy. Chatbots have already been rated more useful and empathic than human doctors; similar technology will be adapted for public information officers. 🛡
Please cite as
Frank Chen, “Exploring AI for Law Enforcement: Insight from an Emerging Tech Expert,” interviewed by Chris Hsiung, Police Chief Online, September 20, 2023.