Artificial intelligence is revolutionizing work, including what it means for cooperative work to be supported by computers. The increased use of AI in CSCW offers many advantages, including increased productivity and efficiency, but it also raises potential ethical trade-offs, such as invasions of privacy, loss of autonomy, and job displacement. This workshop will explore the ethical dimensions of AI in CSCW, building on Good Systems, a UT Grand Challenge. Specifically, the first half of the workshop will focus on the need to design AI that works for all users and avoids bias through universal design, as well as the need for AI and CSCW researchers to collaborate with policy and legal experts to ensure that AI is developed ethically, with sufficient consideration of its societal implications, and that it is regulated and legislated in ways that maximize its benefits for all people.
Title of host publication: CSCW 2019 Companion - Conference Companion Publication of the 2019 Computer Supported Cooperative Work and Social Computing
Publisher: Association for Computing Machinery
Number of pages:
Published: 9 Nov 2019
Event: 22nd ACM Conference on Computer-Supported Cooperative Work and Social Computing, CSCW 2019 - Austin, United States
Duration: 9 Nov 2019 → 13 Nov 2019
Series: Proceedings of the ACM Conference on Computer Supported Cooperative Work, CSCW
Bibliographical note
Funding Information:
Kenneth R. Fleischmann is a professor in the School of Information at the University of Texas at Austin. He holds a B.A. in Computer Science and Anthropology from Case Western Reserve University and an M.S. and Ph.D. in Science and Technology Studies from Rensselaer Polytechnic Institute. His research focuses on the role of human values in the design and use of information technologies, with a particular emphasis on the ethics of AI. He serves as the Inaugural Chair of the Executive Team for the Good Systems Grand Challenge Initiative at UT-Austin. His current funded projects include serving as PI of “Field Research with Policy, Legal, and Technological Experts about Transparency, Trust, and Agency in Machine Learning,” funded by the Cisco Research Center; as Co-PI of the “Microsoft Ability Initiative,” funded by Microsoft Research; and as Co-PI of “Tackling Misinformation Through Socially Responsible AI,” funded by the Micron Foundation.
Danna Gurari is an assistant professor in the School of Information at The University of Texas at Austin. Her research interests span computer vision, human computation, crowdsourcing, machine learning, and accessibility, with a focus on designing visual analysis systems that improve people’s quality of life. She completed a postdoctoral fellowship in Computer Science at UT-Austin, a Ph.D. in Computer Science at Boston University, and an M.S. in Computer Science and a B.S. in Biomedical Engineering at Washington University in St. Louis. She has worked in industry at Boulder Imaging and Raytheon. She received the Researcher Excellence Award from the Boston University Computer Science department in 2015 and, with her collaborators, an Honorable Mention Award at CHI 2017, a Best Paper Award for Innovative Idea at MICCAI IMIC 2014, and a Best Paper Award at WACV 2013.
© 2019 Copyright is held by the author/owner(s).
- Artificial intelligence
- Human values
- Machine learning
- Universal design