Algorithms are everywhere. Here's why you should care
Every time you pick up your smartphone, you're summoning algorithms. They're used for everything from unlocking your phone with your face to deciding what videos you see on TikTok to updating your Google Maps route to avoid a freeway accident on your way to work.
An algorithm is a set of rules or steps followed, often by a computer, to produce an outcome. And algorithms aren't just on our phones: they're used in all kinds of processes, on and offline, from helping value your home to teaching your robot vacuum to steer clear of your dog's poop. Over the years they've increasingly been entrusted with life-altering decisions, such as helping decide who to arrest, who should be released from jail before a court date, and who's approved for a home loan.
In recent weeks, there has been renewed scrutiny of algorithms, including how tech companies should shift the ways they use them. This stems both from concerns raised in hearings featuring Facebook whistleblower Frances Haugen and from bipartisan legislation introduced in the House (a companion bill had previously been reintroduced in the Senate). The legislation would force large tech companies to allow users to access a version of their platforms where what they see isn't shaped by algorithms. These developments highlight mounting awareness about the central role algorithms play in our society.
"At this point, they are responsible for making decisions about pretty much every aspect of our lives," said Chris Gilliard, a visiting research fellow at Harvard Kennedy School's Shorenstein Center on Media, Politics and Public Policy.
Yet the ways in which algorithms work, and the conclusions they reach, can be mysterious, particularly as the use of artificial intelligence techniques makes them ever more complex. Their outcomes aren't always understood, or even accurate, and the consequences can be disastrous. And the impact of potential new legislation to limit the influence of algorithms on our lives remains unclear.
Algorithms, explained
At its most basic, an algorithm is a series of instructions. As Sasha Luccioni, a research scientist on the ethical AI team at AI model builder Hugging Face, pointed out, it can be hard coded, with fixed directions for a computer to follow, such as to put a list of names in alphabetical order. Simple algorithms have been used for computer-based decision making for decades.
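A hard-coded algorithm like Luccioni's alphabetizing example can be written in a few lines of Python. This is purely an illustrative sketch; the names are invented:

```python
def alphabetize(names):
    # A fixed, hard-coded algorithm: follow the same steps every time,
    # comparing strings and returning them in A-to-Z order.
    return sorted(names, key=str.lower)

# Invented example input.
print(alphabetize(["Sasha", "Chris", "Jevan"]))  # ['Chris', 'Jevan', 'Sasha']
```

Nothing here is learned or adaptive: the computer simply executes the same fixed directions on whatever list it is given.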
Today, algorithms help ease otherwise-complicated processes all the time, whether we know it or not. When you direct a clothing website to filter pajamas to see the most popular or least expensive options, you're using an algorithm essentially to say, "Hey, Old Navy, go through the steps to show me the cheapest jammies."
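The "cheapest jammies" request boils down to two steps: filter to one category, then sort by price. A minimal sketch of that idea, with a made-up product catalog (the field names and prices are invented for illustration, not any retailer's actual data):

```python
# Hypothetical catalog; all entries are invented for illustration.
catalog = [
    {"name": "Flannel pajamas", "category": "pajamas", "price": 24.99},
    {"name": "Denim jacket",    "category": "jackets", "price": 59.99},
    {"name": "Cotton pajamas",  "category": "pajamas", "price": 12.99},
]

def cheapest_first(products, category):
    # Step 1: keep only items in the requested category.
    matches = [p for p in products if p["category"] == category]
    # Step 2: order them from least to most expensive.
    return sorted(matches, key=lambda p: p["price"])

for p in cheapest_first(catalog, "pajamas"):
    print(p["name"], p["price"])
```

A real storefront layers much more on top (inventory, personalization, ads), but the user-facing "sort by price" control is at heart this simple filter-and-sort procedure.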
All kinds of things can be algorithms, and they're not confined to computers: A recipe, for instance, is a sort of algorithm, as is the weekday morning routine you sleepily shuffle through before leaving the house.
"We run on our own personal algorithms every day," said Jevan Hutson, a data privacy and security lawyer at Seattle-based Hintze Law who has studied AI and surveillance.
But while we can interrogate our own decisions, those made by machines have become increasingly enigmatic. That's because of the rise of a form of AI known as deep learning, which is modeled after the way neurons work in the brain and gained prominence about a decade ago.
A deep-learning algorithm might task a computer with looking at thousands of videos of cats, for instance, to learn to identify what a cat looks like. (It was a big deal when Google figured out how to do this reliably in 2012.) The result of this process of binging on data and improving over time would be, in essence, a computer-generated procedure for how the computer will identify whether there's a cat in any new pictures it sees. This is often known as a model (though it is also at times referred to as an algorithm itself).
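Real deep-learning systems adjust millions of parameters, but the train-then-apply loop can be sketched with a toy classifier. Here each "image" is reduced to a single invented feature score, and "training" just finds a dividing threshold between the two classes; every number below is made up for illustration:

```python
# Toy illustration of "learning from examples" -- not a real deep-learning
# system. Each "image" is reduced to one invented feature value, and the
# two lists mark whether a cat was present.
cat_examples    = [0.8, 0.9, 0.7, 0.85]   # feature values for cat images
no_cat_examples = [0.1, 0.2, 0.15, 0.3]   # feature values for non-cat images

def train(cats, non_cats):
    # "Training" here just places a threshold halfway between the two
    # class averages; deep learning tunes millions of weights instead.
    cat_mean = sum(cats) / len(cats)
    non_cat_mean = sum(non_cats) / len(non_cats)
    threshold = (cat_mean + non_cat_mean) / 2
    # The returned function is the "model": a computer-generated
    # procedure for judging any new image it is shown.
    return lambda feature: feature > threshold

model = train(cat_examples, no_cat_examples)
print(model(0.75))  # True  -> classified as "cat"
print(model(0.05))  # False -> classified as "not cat"
```

The key point the article makes survives even in this toy: the decision rule is produced by the data-digesting process, not written out by a programmer, which is part of why complex models can be hard to explain.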
These models can be incredibly complex. Facebook, Instagram, and Twitter use them to help personalize users' feeds based on each person's interests and prior activity. The models can also be based on mounds of data collected over many years that no human could possibly sort through. Zillow, for instance, has been using its trademarked, machine learning-assisted "Zestimate" to estimate the value of homes since 2006, taking into consideration tax and property records, homeowner-submitted details such as the addition of a bathroom, and pictures of the house.
The risks of relying on algorithms
As Zillow's case shows, however, offloading decision-making to algorithmic systems can also go awry in excruciating ways, and it's not always clear why.
Zillow recently decided to shutter its home-flipping business, Zillow Offers, showing how hard it is to use AI to value real estate. In February, the company had said its "Zestimate" would represent an initial cash offer from the company to purchase the property through its house-flipping business; in November, the company took a $304 million inventory writedown, which it blamed on having recently purchased homes at prices higher than it thinks it can sell them for.
Elsewhere online, Meta, the company formerly known as Facebook, has come under scrutiny for tweaking its algorithms in a way that helped incentivize more negative content on the world's largest social network.
There have been life-changing consequences of algorithms, too, particularly in the hands of police. We know, for instance, that several Black men, at least, have been wrongfully arrested due to the use of facial-recognition systems.
There's often little more than a basic explanation from tech companies on how their algorithmic systems work and what they're used for. Beyond that, experts in technology and tech law told CNN Business that even those who build these systems don't always know why they reach their conclusions — which is a reason why they're often referred to as "black boxes."
"Computer scientists, data scientists, at this current stage they seem like wizards to a lot of people because we don't understand what it is they do," Gilliard said. "And we think they always do, and that's not always the case."
Popping filter bubbles
The United States doesn't have federal rules for how companies can or can't use algorithms in general, or those that harness AI in particular. (Some states and cities have passed their own rules, which tend to address facial-recognition software or biometrics more generally.)
But Congress is currently considering legislation dubbed the Filter Bubble Transparency Act, which, if passed, would force large Internet companies such as Google, Meta, TikTok and others to "give users the option to engage with a platform without being manipulated by algorithms driven by user-specific data."
In a recent CNN Opinion piece, Republican Sen. John Thune described the legislation he cosponsored as "a bill that would essentially create a light switch for big tech's secret algorithms — artificial intelligence (AI) that's designed to shape and manipulate users' experiences — and give consumers the choice to flip it on or off."
Facebook, for example, already offers something like this, though users are effectively discouraged from flipping the so-called switch permanently. A fairly well-hidden "Most Recent" button will show you posts in reverse chronological order, but your Facebook News Feed will revert to its original, algorithmically curated state once you leave the website or close the app. Meta stopped offering such an option on Instagram, which it also owns, in 2016.
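The difference between a "Most Recent" view and an algorithmic feed is, at bottom, a difference in sorting rules: timestamp versus a computed relevance score. A minimal sketch with invented post data (the scores stand in for the output of a ranking model, which in reality is far more complex):

```python
# Hypothetical posts; timestamps (Unix epoch seconds) and relevance
# scores are invented for illustration.
posts = [
    {"author": "alice", "timestamp": 1_636_000_000, "score": 0.9},
    {"author": "bob",   "timestamp": 1_636_100_000, "score": 0.2},
    {"author": "carol", "timestamp": 1_636_050_000, "score": 0.7},
]

def ranked_feed(posts):
    # Algorithmic feed: order by a (here, made-up) relevance score.
    return sorted(posts, key=lambda p: p["score"], reverse=True)

def most_recent_feed(posts):
    # "Most Recent": plain reverse-chronological order, no ranking model.
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

print([p["author"] for p in most_recent_feed(posts)])  # ['bob', 'carol', 'alice']
print([p["author"] for p in ranked_feed(posts)])       # ['alice', 'carol', 'bob']
```

Legislation like the Filter Bubble Transparency Act would, in effect, require platforms to let users choose the second function over the first.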
Hutson noted that while the Filter Bubble Transparency Act clearly focuses on large social platforms, it will inevitably affect others, such as Spotify and Netflix, that depend deeply on algorithmically driven curation. If it passes, he said, it will "fundamentally change" the business model of companies built entirely around algorithmic curation, a feature he suspects many users appreciate in certain contexts.
"This is going to impact organizations far beyond those that are in the spotlight," he said.
AI experts argue that more transparency from companies making and using algorithms is crucial. Luccioni believes laws on algorithmic transparency are necessary before specific uses and applications of AI can be regulated.
"I see things changing, definitely, but there is a really frustrating lag between what AI is capable of and what it's legislated for," Luccioni said.