From the early attempts of the late 1980s (such as the MAIA project) to the most recent breakthroughs in applications of deep learning, humankind has dreamed of building machines capable of learning new tasks, adapting to their environment, and evolving. Yet this pursuit poses significant computational, practical, and ethical challenges. Failure to properly address these challenges in such software-intensive systems can lead to catastrophic consequences. Consider, for example, the recent human toll caused by the $47-million Michigan Integrated Data Automated System (MiDAS) (see Broken: The human toll of Michigan’s unemployment fraud saga), or the recent finding that simple physical tweaks can fool neural networks into misidentifying street signs (see Robust Physical-World Attacks on Deep Learning Visual Classification).
The growing concern about machine learning’s impact on people’s lives found a strong advocate in Prof. David Parnas, who voiced it in a Communications of the ACM article. These challenges are also reflected in new IEEE standardization initiatives. With data science and deep learning becoming increasingly pervasive in the contemporary world, it is now imperative to engage software engineers and machine learning experts in in-depth conversations about the perspectives, approaches, and roadmaps needed to address these challenges and concerns.
In particular, we are interested in (but not limited to) discussing the following topics concerning software-intensive machine learning applications “in the wild”:
- Architecture and software design
- Model/data verification and validation
- Change management
- User experience evaluation and adjustment
- Privacy, safety, and security issues
- Ethical concerns
We encourage and welcome experts from all sub-fields of software engineering and machine learning to participate in this discussion.