ICSE Main Conference Keynotes
Mike Milinkovich
Open Collaboration, the Eclipse Way
Software is eating the world, and open source is eating the software world. In almost every facet of human endeavour, organizations are learning that they must master software development if they are going to continue to succeed. At the same time, it is becoming apparent that open source is the most successful model for developing and delivering software. Open source is supporting new business models, open innovation, and far greater levels of productive collaboration, giving rise to new ways of organizing the development of technology.
The Eclipse community started 15 years ago as a single project, and the Eclipse IDE has been a huge part of our past and current success. More recently we have been bringing the Eclipse Way to entirely new technology areas, ranging from the internet of things to language runtimes, from simulation engines to geospatial technologies. In doing so, we have evolved mature processes that enable academia, research organizations, industry, and hobbyists to succeed with open innovation.
Short bio
Mike Milinkovich has been involved in the software industry for over thirty years, doing everything from software engineering to product management to IP licensing. He has been the Executive Director of the Eclipse Foundation since 2004. In that role he is responsible for supporting both the Eclipse open source community and its commercial ecosystem. Prior to joining Eclipse, Mike was a vice president in Oracle's development group. Other stops along the way include WebGain, The Object People, IBM, Object Technology International (OTI), and Nortel.
Mike sits on the Board of the Open Source Initiative (OSI), on the Executive Committee of the Java Community Process (JCP), and is an observer and past member of the Board of OpenJDK.
Moshe Y. Vardi
The Automated-Reasoning Revolution: From Theory to Practice and Back
For the past 40 years, computer scientists have generally believed that NP-complete problems are intractable. In particular, Boolean satisfiability (SAT), as a paradigmatic automated-reasoning problem, has been considered intractable. Over the past 20 years, however, there has been a quiet but dramatic revolution, and very large SAT instances are now solved routinely as part of software and hardware design. In this talk I will review this amazing development and show how automated reasoning is now an industrial reality.
I will then describe how we can leverage SAT solving to accomplish other automated-reasoning tasks. Counting the number of satisfying truth assignments of a given Boolean formula, or sampling such assignments uniformly at random, are fundamental computational problems in computer science, with applications in software testing, software synthesis, machine learning, personalized learning, and more. While the theory of these problems has been thoroughly investigated since the 1980s, approximation algorithms developed by theoreticians do not scale up to industrial-sized instances. Algorithms used by industry offer better scalability, but give up correctness guarantees to achieve it. We describe a novel approach, based on universal hashing and Satisfiability Modulo Theories, that scales to formulas with hundreds of thousands of variables without giving up correctness guarantees.
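To make the hashing idea concrete, here is a minimal TypeScript sketch, with brute-force enumeration standing in for a real SAT solver (so it handles only tiny formulas); the toy formula and all function names are invented for illustration, not taken from the speaker's tools. Each random XOR constraint cuts the solution space roughly in half, so the number of solutions surviving m constraints, scaled by 2^m, estimates the total model count.

```typescript
type Assignment = boolean[];
type Formula = (v: Assignment) => boolean;

// Enumerate all models of `f` over `n` variables by brute force.
function models(f: Formula, n: number): Assignment[] {
  const out: Assignment[] = [];
  for (let bits = 0; bits < 1 << n; bits++) {
    const v = Array.from({ length: n }, (_, i) => ((bits >> i) & 1) === 1);
    if (f(v)) out.push(v);
  }
  return out;
}

// A random parity (XOR) constraint over a random subset of the variables.
function randomXor(n: number): Formula {
  const subset = Array.from({ length: n }, () => Math.random() < 0.5);
  const parity = Math.random() < 0.5;
  return (v) =>
    v.reduce((acc, bit, i) => acc !== (subset[i] && bit), false) === parity;
}

// Survivors of m random XORs, scaled by 2^m, estimate the model count.
function approxCount(f: Formula, n: number, m: number): number {
  const xors = Array.from({ length: m }, () => randomXor(n));
  const cell = models(f, n).filter((v) => xors.every((x) => x(v)));
  return cell.length * 2 ** m;
}

// Toy formula: (x0 || x1) && !x2 over 4 variables has exactly 6 models.
const f: Formula = (v) => (v[0] || v[1]) && !v[2];
console.log(models(f, 4).length, approxCount(f, 4, 2));
```

Practical counters in this line of work obtain their correctness guarantees by choosing the number of XOR constraints adaptively and by using solvers with native support for parity reasoning, rather than by enumeration as above.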
Short bio
Moshe Y. Vardi is the Karen Ostrum George Distinguished Service Professor in Computational Engineering and Director of the Ken Kennedy Institute for Information Technology at Rice University. He is the recipient of three IBM Outstanding Innovation Awards, the ACM SIGACT Gödel Prize, the ACM Kanellakis Award, the ACM SIGMOD Codd Award, the Blaise Pascal Medal, the IEEE Computer Society Goode Award, the EATCS Distinguished Achievements Award, and the Southeastern Universities Research Association's Distinguished Scientist Award.
He is the author or co-author of over 500 papers, as well as two books: "Reasoning about Knowledge" and "Finite Model Theory and Its Applications". He is a Fellow of the Association for Computing Machinery, the American Association for Artificial Intelligence, the American Association for the Advancement of Science, the European Association for Theoretical Computer Science, the Institute of Electrical and Electronics Engineers, and the Society for Industrial and Applied Mathematics. He is a member of the US National Academy of Engineering and the National Academy of Sciences, the American Academy of Arts and Sciences, the European Academy of Sciences, and Academia Europaea. He holds honorary doctorates from Saarland University in Germany, the University of Orléans in France, and UFRGS in Brazil. He is the Editor-in-Chief of the Communications of the ACM.
Plenary IEEE CS Harlan D. Mills Award
Pamela Zave
Using Software Engineering to Teach Networking
In 1997 the Internet was a world-changing phenomenon. In the 20 years since then, success has brought many new challenges and many new implementation technologies. As with so many other applications of computing, networking is now a software problem.
At the same time, networking is struggling to become a mature academic discipline within computer science. The conventional wisdom is that this is due to the overwhelming scale and utility of the Internet, which makes deployment of disruptive technologies unlikely. As a result, respectable research is confined to short-term problems, and the research community is continually depleted by brain drain to industry.
In this talk I take a contrarian view. As a recent discipline gathering people from disparate engineering backgrounds to work with the great phenomenon of 1997, networking lacks the crucial "gene" for appreciation of the importance of precise functional description. Without precise description, there is no useful abstraction or generalization. Worse yet, researchers are assuming that the Internet still has the simple monolithic architecture of 1997, without realizing that today's network scarcely resembles it. The Internet is now a composition of a vast variety of special-purpose networks.
The talk introduces a compositional theory of network architecture, focusing on the formalization and study of composition operators on networks. The theory provides a precise description of the Internet as it works today. It is being developed to study properties of networks and to support compositional reasoning about them, which leads to architectural design principles. The theory could have a significant impact on the future development of network software, and it should encourage researchers to find practical solutions to long-standing unsolved problems, such as full, affordable support for the Internet of Things.
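As a purely hypothetical illustration of what a composition operator on networks might look like (a sketch under invented definitions, not Zave's actual formalism), the following TypeScript models layering: each link of an overlay network must be realized as a session that an underlay network provides between the attachment points of the link's endpoints.

```typescript
interface Network {
  name: string;
  members: string[];              // named nodes/interfaces
  links: [string, string][];      // direct connectivity between members
}

// Layering composes two networks: every overlay link is mapped to an
// underlay session between the attachments of its two endpoints.
function layer(
  overlay: Network,
  underlay: Network,
  attachment: Record<string, string>   // overlay member -> underlay member
): { sessions: [string, string][] } {
  const sessions = overlay.links.map(([a, b]): [string, string] => {
    const ua = attachment[a];
    const ub = attachment[b];
    if (!underlay.members.includes(ua) || !underlay.members.includes(ub)) {
      throw new Error(`overlay link ${a}-${b} has no underlay attachment`);
    }
    return [ua, ub];                   // a session the underlay must provide
  });
  return { sessions };
}

// Example: a two-member VPN layered on an IP network.
const vpn: Network = { name: "vpn", members: ["h1", "h2"], links: [["h1", "h2"]] };
const ip: Network = { name: "ip", members: ["10.0.0.1", "10.0.0.2"], links: [] };
console.log(layer(vpn, ip, { h1: "10.0.0.1", h2: "10.0.0.2" }).sessions);
```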
Teaching is the best way to introduce a new intellectual gene into the next generation's gene pool, so the talk concludes with my experience teaching compositional network architecture to a graduate class at Princeton.
Short bio
Pamela Zave received the A.B. degree in English from Cornell University and the Ph.D. degree in computer sciences from the University of Wisconsin–Madison. She has held positions at the University of Maryland and Bell Laboratories, and is now with AT&T Laboratories–Research and Princeton University.
Dr. Zave has been an ACM Fellow since 2002 and an AT&T Fellow since 2009. Her work on the foundations of requirements engineering was the basis of the REVEAL method taught by Praxis Systems Ltd., and has been recognized with three Ten-Year Most Influential Paper awards. She led a group that developed the service software for two successful large-scale IP-based telecommunication systems based on her Distributed Feature Composition (DFC) architecture. DFC has been incorporated into the Java Community Process standard for SIP Servlet Containers, and into an open-source tool suite for building SIP-based services. Her work on telecommunications also led to 28 patents, an AT&T Strategic Patent Award, and three Best Paper awards. In her work on protocol verification, she produced the first formal models of two important protocols, SIP and Chord, applying formal methods to debug, improve, and verify them. Her current research interests focus on the architecture and composition of networks and distributed systems.
Dr. Zave has been chair of IFIP Working Group 2.3 on Programming Methodology, vice-chair of ACM SIGSOFT, and secretary-treasurer of ACM SIGSOFT. She has served as an associate editor of IEEE Transactions on Software Engineering, ACM Transactions on Software Engineering and Methodology, Requirements Engineering, and ACM Computing Surveys. She co-organized two Dagstuhl seminars and was the program-committee chair or co-chair of four conferences, including the Second IEEE International Symposium on Requirements Engineering and the Tenth International Formal Methods Europe Symposium. She has also given invited lectures at over 40 conferences and summer schools.
Plenary SIGSOFT Outstanding Research Award
Daniel Jackson
The Alloyed Joys of Software Engineering Research
Short bio
Daniel Jackson is Professor of Computer Science at MIT, a MacVicar teaching fellow, and Associate Director of the Computer Science and Artificial Intelligence Laboratory, MIT’s largest laboratory. He is the lead designer of the Alloy modeling language and author of Software Abstractions: Logic, Language, and Analysis (MIT Press, 2nd ed., 2012). He was chair of the National Academies study Software for Dependable Systems: Sufficient Evidence? (2007). His research currently focuses on a new approach to software design, on new programming paradigms, and on cybersecurity.
Plenary Most Influential Paper (MIP) 2007
Feedback-directed Random Test Generation 10 Years Later: Perspectives from University and Industry
Carlos Pacheco, Shuvendu K. Lahiri, Michael D. Ernst, and Thomas Ball
Abstract:
In our ICSE 2007 paper, we presented a "technique that improves random test generation by incorporating feedback obtained from executing test inputs as they are created. ... The result of the execution determines whether the input is redundant, illegal, contract-violating, or useful for generating more inputs. The technique outputs a test suite consisting of unit tests for the classes under test." The Randoop tools implement this technique of feedback-directed random test generation for object-oriented programs written in C# and Java.
In this talk, we will review the technique and the tools, and discuss their impact in industry and on software engineering research. We will describe the use of Randoop for testing various components of the .NET framework at Microsoft, provide perspectives on how this work fits into the broader landscape of automatic test-generation research, and share a few lessons learned that may be applicable to other software engineering research projects.
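As a rough sketch of the feedback loop described above, the following TypeScript toy reimplements the idea on an invented Counter class with a simple contract; Randoop itself targets Java and C#, and this sketch shows only three of the four input classifications (redundant, contract-violating, useful).

```typescript
class Counter {
  private n = 0;
  inc(): number { return ++this.n; }
  dec(): number {
    this.n--;
    if (this.n < 0) throw new Error("contract violated: counter went negative");
    return this.n;
  }
}

type Method = "inc" | "dec";

function generate(steps: number) {
  const useful: Method[][] = [];
  const failing: Method[][] = [];
  const seen = new Set<string>();
  for (let i = 0; i < steps; i++) {
    // Extend a previously useful sequence (or start fresh) with a random call.
    const prefix = useful.length
      ? useful[Math.floor(Math.random() * useful.length)]
      : [];
    const seq: Method[] = [...prefix, Math.random() < 0.5 ? "inc" : "dec"];
    const obj = new Counter();
    const trace: number[] = [];
    try {
      for (const m of seq) trace.push(obj[m]());
    } catch {
      failing.push(seq);             // contract-violating: a candidate bug report
      continue;
    }
    const key = trace.join(",");
    if (seen.has(key)) continue;     // redundant: same observable behavior
    seen.add(key);
    useful.push(seq);                // useful: feed back as a building block
  }
  return { useful, failing };
}

const { useful, failing } = generate(200);
console.log(`${useful.length} regression tests, ${failing.length} failing sequences`);
```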
Authors' short bios
Carlos Pacheco is currently a tech lead at Google, where his focus is large-scale verification of production systems. His team has built one of the largest near-real-time verification systems at Google, which checks the correctness of ads served across Google properties. Carlos is also a fine artist focusing on realism (http://adanfineart.com), and his portraiture has been recognized at a national level, most recently for his painting Mazatleca, which was a winner in the Portrait Society of America's 2016 annual competition.
Dr. Shuvendu Lahiri is a senior researcher in the Research in Software Engineering (RiSE) team at Microsoft Research, Redmond. He is interested in rigorous methods for improving the verification, testing, and code review of production software. His current focus lies in differential program analysis and angelic verification; the latter has shipped with the Windows driver verifier. He has worked on logics and decision procedures, algorithms for verifying heap-manipulating programs, and abstraction techniques for hardware and software. Shuvendu holds a PhD from Carnegie Mellon University and a B.Tech. from IIT Kharagpur. His PhD dissertation on indexed predicate abstraction received the ACM SIGDA outstanding thesis award. More information can be found at http://research.microsoft.com/~shuvendu.
Michael D. Ernst is a Professor in the Computer Science & Engineering department at the University of Washington.
Ernst's research aims to make software more reliable, more secure, and easier (and more fun!) to produce. His primary technical interests are in software engineering, programming languages, type theory, security, program analysis, bug prediction, testing, and verification. Ernst's research combines strong theoretical foundations with realistic experimentation, with an eye to changing the way that software developers work.
Ernst is an ACM Fellow (2014) and received the inaugural John Backus Award (2009) and the NSF CAREER Award (2002). His research has received an ICSE Most Influential Paper Award (2017), an ACM SIGSOFT Impact Paper Award (2013), 8 ACM Distinguished Paper Awards (FSE 2014, ISSTA 2014, ESEC/FSE 2011, ISSTA 2009, ESEC/FSE 2007, ICSE 2007, ICSE 2004, ESEC/FSE 2003), an ECOOP 2011 Best Paper Award, honorable mention in the 2000 ACM doctoral dissertation competition, and other honors. In 2013, Microsoft Academic Search ranked Ernst #2 in the world in software engineering research contributions over the preceding 10 years. In 2016, AMiner ranked Ernst #3 among all software engineering researchers ever.
Dr. Ernst was previously a tenured professor at MIT, and before that a researcher at Microsoft Research.
Thomas (Tom) Ball is a principal researcher and manager at Microsoft Research. Tom initiated the influential SLAM software model-checking project with Sriram Rajamani, which led to the creation of the Static Driver Verifier tool for finding defects in Windows device drivers. Tom is a 2011 ACM Fellow for “contributions to software analysis and defect detection.” As a manager, he has nurtured research areas such as automated theorem proving, program testing/verification, and empirical software engineering. His current focus is CS education and the Microsoft MakeCode platform for programming with physical computing.
Tom Ball
Physical Computing for Everyone
Thanks to Moore’s Law, embeddable microcontroller-based devices continue to get cheaper and faster, and to include more integrated sensors and networking options. In 2016, the BBC and a host of technical partners, including Microsoft, delivered such a physical computing device, the micro:bit, to every 5th grader in the UK. Microsoft Research helped to make the micro:bit easy to program for novices. The non-profit Micro:bit Education Foundation (microbit.org), of which Microsoft is a founding partner, was recently created to take the micro:bit global. Over the last year, Microsoft has invested in a new web-based programming platform for physical computing in education, called Microsoft MakeCode (www.makecode.com). In this talk, I’ll describe the design principles behind the platform and our experience with it to date.
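For a flavor of what programming the device looks like: MakeCode's text mode is a subset of TypeScript, and the snippet below uses APIs from the standard micro:bit target (it runs in the MakeCode editor or on a micro:bit, not in plain Node).

```typescript
// Show the current temperature when button A is pressed.
input.onButtonPressed(Button.A, function () {
    basic.showNumber(input.temperature())
})
// Continuously watch the light sensor and react when it gets dark.
basic.forever(function () {
    if (input.lightLevel() < 20) {
        basic.showIcon(IconNames.Asleep)
    }
})
```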
Short bio
Thomas (Tom) Ball is a principal researcher and manager at Microsoft Research. Tom initiated the influential SLAM software model-checking project with Sriram Rajamani, which led to the creation of the Static Driver Verifier tool for finding defects in Windows device drivers. Tom is a 2011 ACM Fellow for “contributions to software analysis and defect detection.” As a manager, he has nurtured research areas such as automated theorem proving, program testing/verification, and empirical software engineering. His current focus is CS education and the Microsoft MakeCode platform for programming with physical computing.