When I attended Stanford, many moons ago, the Computer Science curriculum did not itself require a class in ethics. Of course, distribution requirements encouraged taking classes far outside Computer Science, and some students certainly took philosophy classes. In fact, Symbolic Systems was a major that combined elements of Computer Science and Philosophy into a more cohesive program, one that included elements of ethics and computing.
The Symbolic Systems program is the study of "the science of the mind." At the intersection with philosophy, it examines the relationship between humans and computers, incorporating the areas of computer science, psychology, and linguistics.
In my experience, many techies avoided these classes. But a broad education - one that required reading works by many renowned authors, and arguing values, politics, policy, and economics with students and professors alike - gave you the opportunity to build a moral and ethical framework for yourself and your profession, regardless of which profession you might choose.
Neil, in this video, shows how people conflate two key concepts when they worry about AI ethics:
- Ethical Intent - are we planning to do the right thing? Are we intending to use AI in a way that aligns with our organization and society?
- Thoughtful Implementation - are we doing things correctly, assuming we have ethical intent? Do our practices improve the chances that we aren't subverting our goals with sloppy execution?
And if you read the press about AI and ethics, these are precisely the two things everyone worries about - the malicious intent of others, and the thoughtless implementation of our friends.