My Path as a Techie Learning About Equity
Image credit: Getty Images

Author: Jonathan Rotner

A couple of years ago, a small team of researchers and I published a website and a paper called "AI Fails and How We Can Learn from Them." There are a couple of stories that really stayed with me:

As an electrical engineer, my first reaction is that these are engineering problems that can be analyzed and fixed.

However, they're actually issues of Equity.

The 'people you may know' example reminded me that there are always unintended consequences that can result in real harm. The healthcare AI emphasized to me the danger of relying on existing data that purports to be objective yet stems from a history of systemic injustice, as well as the difficulty of trying to help a variety of people who have very different needs.

There was also a third story that stayed with me, one that left me with a very different takeaway.

This example showed me that when a technology is specifically designed to help those in need, and it's driven by the input and goals of those in need, the outcomes can be equitable and extraordinary.

From Equity experts, I learned that "Equity" is the elimination of disproportion and disparity. Equity occurs when outcomes are not predictable based on the characteristics of an individual or group. Equity isn't the same as "Equality": Equality assumes we all have the same needs and goals; Equity directs us to understand the specific obstacles each of us faces and to leverage the specific resources each of us possesses.
Equity also isn't the same as "Fairness": Fairness is an ideal, a philosophy that guides us towards pursuing good outcomes; Equity doesn't care which philosophy is followed, instead emphasizing that goodness must be measured by impact, not intent.

As for me: I majored in electrical engineering, earned a Master's degree in circuit design, and worked on 3D-printing prototyping and artificial intelligence research projects for the national defense sector, yet none of those experiences prepared or trained me to consider Equity, nor did they equip me with tools to design for Equity. I needed to practice designing for Equity to understand it, let alone get better at it.

More and more issues of Equity, like the ones above, are coming to light in advanced technology. They're not simple engineering problems, but examples of hidden biases and limited perspectives in data and algorithm creation. Just as engineers and developers learn fundamental algorithms for efficient use of computer resources, study theories of computability and compiler design, and practice by developing programs in different computer languages, so must we start training on Equity. That means learning to view technologies as part of a complex ecosystem that interacts with and influences human behavior, decision making, preferences, strategies, and ways of life in beneficial, and sometimes less beneficial, ways. Once we understand why Equity is important, we can incorporate it into technological design.

That's why I'm working to develop an "Equity for Techies" workbook.

—

I created the "Equity for Techies" workbook because I wanted to make Equity seem less foreign to my fellow Techies. I pulled from established design techniques and some remarkable authors [2] to inform this workbook.
The goal of the workbook is to help Techie teams define and redefine success for their project by linking success criteria to equitable outcomes.

It starts with a way to understand Equity, and a way to understand how to work towards equitable outcomes.

Part 1 introduces patterns of technological harm, called technology risk zones, [3] which include:

This section asks the reader to assess which zones are most applicable to their project and which groups of people might be at greater risk. The team can walk away with a sense of which populations might be at particular risk from the results of their effort and a greater appreciation of outcomes to avoid.

Part 2 flips the script and asks the team to imagine alternatives that place Equity at the core of the effort. For every technology risk zone there is an equitable opportunity:

This section asks the reader to explore equitable opportunities specifically for the groups at particular risk, as identified in the previous section. The team can walk away with a prioritized list of the group(s) who can benefit from their efforts, as well as ideas for the types of services or resources that might benefit members of those groups.

Part 3 establishes guideposts to help the team revisit success criteria. Through prompts that draw on principles of Equity and on the previous exercises, the team is invited to create new success criteria. The prompts are captured below.
The final exercise prompts action. It offers many example next steps to help the team follow through on their ideas. Many of the examples point to other great tools that MITRE's social justice platform and Innovation Toolkit teams have put together, including a tool for incorporating an Equity lens into business innovation; a framework for assessing Equity in federal programs and policies; and a library of Equity indices, models, and metrics. After completing this section, the team can walk away with an actionable plan that specifies who is doing what by when in order to ensure successful, equitable outcomes.
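As a small illustration of what measuring equitable outcomes might look like in practice, here is a minimal Python sketch, not taken from the workbook, that applies the definition of Equity used earlier in this piece (Equity occurs when outcomes are not predictable based on the characteristics of a group) to audit data. The group labels, numbers, and tolerance threshold are all invented for illustration:

```python
# Minimal sketch: check whether an outcome is "predictable" from group
# membership by comparing per-group outcome rates. All names and numbers
# below are hypothetical illustrations, not part of the workbook.

from collections import defaultdict

def outcome_rates_by_group(records):
    """records: iterable of (group_label, got_benefit: bool) pairs."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, got_benefit in records:
        totals[group] += 1
        positives[group] += int(got_benefit)
    return {g: positives[g] / totals[g] for g in totals}

def disparity_gap(rates):
    """Largest difference in outcome rate between any two groups."""
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: who actually received a service
# (measuring impact, not intent).
records = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 55 + [("B", False)] * 45)

rates = outcome_rates_by_group(records)
gap = disparity_gap(rates)   # about 0.25 for this data
TOLERANCE = 0.10             # hypothetical team-chosen threshold
if gap > TOLERANCE:
    print(f"Disparity gap {gap:.2f} exceeds tolerance; outcomes are "
          f"predictable from group membership: {rates}")
```

The point of a check like this is that it turns "equitable outcomes" into a success criterion a team can track over time, alongside the usual engineering metrics.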
—
Changing our habits and patterns is hard, no matter the topic. Changing how we think and act around Equity is really hard. It takes practice. Humans learn through repetition. I'll say it again: humans learn through repetition.

I want to better understand Equity and weave its principles into my work. Right now, the Equity for Techies workbook is in draft form, and I'm looking for teams (inside and outside of MITRE) who want to test it out on their projects so we can keep making it clearer and more useful. Please reach out if you would like to participate or learn more!

In the meantime, let's keep learning, keep trying, and keep checking in with others on what we're doing. Engaging with this topic isn't quick and it isn't easy. But it matters, because our work as Techies affects others, whether we realize it or not, and whether we intend it or not. Working on this together will help us practice and get better. Share your successes, share your stumbles, share your stories; you are always welcome to reach out to me at jrotner@mitre.org.
Jonathan Rotner is a human-centered technologist who helps program managers, algorithm developers, and operators appreciate technology's influence on human behavior. His work focuses on increasing communication around, and trust in, automated processes.

Special thanks to Howard Gershen for his work making this piece better.
References
[1] S. Mullainathan, "Biased Algorithms Are Easier to Fix Than Biased People," The New York Times, Dec. 6, 2019. [Online]. Available: https://www.nytimes.com/2019/12/06/business/algorithm-bias-fix.html?searchResultPosition=1

Z. Obermeyer, B. Powers, C. Vogeli, and S. Mullainathan, "Dissecting racial bias in an algorithm used to manage the health of populations," Science, vol. 366, no. 6464, pp. 447-453, Oct. 25, 2019. [Online]. Available: https://science.sciencemag.org/content/366/6464/447

- Dr. Christine Marie Ortiz Guzman, Equity Meets Design

- The Ethical Explorer Guide and the Ethical OS, from the Institute for the Future, a think tank, and the Tech and Society Solutions Lab, a former initiative of the impact investing firm Omidyar Network. Available at https://ethicalexplorer.org/ and https://ethicalos.org/

- Creative Reaction Lab, especially their lecture series. https://www.creativereactionlab.com/

- MITRE's Innovation Toolkit. Available at: https://itk.mitre.org/

- MITRE's Framework for Assessing Equity in Federal Programs and Policy. Available at: https://www.mitre.org/publications/technical-papers/a-framework-for-assessing-equity-in-federal-programs-and-policy
I am incredibly grateful to be able to build on their work.
© 2022 The MITRE Corporation. All rights reserved. Approved for public release. Distribution unlimited. Case number 22-1400.

