The SPLASH conference is a programming languages and software engineering conference sponsored by ACM SIGPLAN and SIGSOFT. Prior to 2010, the conference was known as OOPSLA (Object-Oriented Programming, Systems, Languages, and Applications). It was renamed in 2010 to signal a scope that goes beyond just “object oriented” technologies. This is a natural evolution -- programming language research has expanded over time to include more work on functional languages, dynamic languages, program verification for both academic and practical languages, alternative programming paradigms, and performance. The change has worked out well -- a broader scope than “object oriented programming and systems” has helped keep the conference relevant.
SPLASH 2017 was held from Oct. 22 to Oct. 27, 2017 in Vancouver BC, Canada. I attended this conference as the main organizer for the Escaped From the Lab workshop.
I didn’t attend the entire conference, but I did spend time in my workshop, the SPLASH-E program (educator forum), and the first couple of days of the main conference. There were a lot of papers on “typing” -- especially the idea of “gradual typing” (which, according to a couple of papers, is not dead). I saw a few interesting poster papers on verification techniques and formal specifications.
The SPLASH-I sessions had several neat technologies to show off.
Chris Granger (Kodowa) gave a presentation inspired by his work in developing his company’s next-generation programming environment, Eve. His main theme was that we use computer programming to “augment humans” -- and that this is a powerful perspective for developing new products.
Chris’s main complaint with modern programming languages and software development environments is that they are very unforgiving -- they don’t really have “empathy” for the user. Simple tasks -- writing a short script or installing a new package -- require a lot of specialized knowledge:
In the old days, computers weren’t very powerful, so we forced users to learn the computer’s way of doing things, instead of using the computer to adapt to the users’ needs.
Chris used a “woodworking” metaphor. In woodworking, there are a number of different tools for shaping wood: chisels, planes, saws. A small set of “composable tools” is how we build with wood. Also, in a woodworking shop, you are always making small ad hoc tools (called “jigs”) to assist with special tasks.
In computer programming, it isn’t clear what the “stuff” of programming is (the equivalent of wood in woodworking). Our tools are often way too complicated, and it costs too much to make tailored small tools for special tasks.
Chris also talked about cooking and recipes. He claims that many recipes are a combination of “imperative” and “declarative” approaches. There is some of the step-by-step instruction approach that is used in many programming languages, but we actually “declare” many of the ingredients... we don’t talk about the complete end-to-end process (grinding wheat and churning butter). Also, recipes are very adaptable -- a cook can leave out ingredients they don’t like (cilantro or hot peppers), or adjust the seasoning or texture to suit their preferences.
This kind of flexibility is lacking in programming -- especially if you are using code libraries. You usually don’t have the option of changing the library.
Chris says we need development environments that have a mix of code and data, and that have the ability to remix and repurpose some of the code/data elements -- somewhat along the lines of Apple’s old HyperCard system from the late 1980s and 1990s.
The examples from the Eve programming environment (http://witheve.com) show the ability of a simple language and development environment to build up an application inside of an “exploratory environment” -- users get immediate feedback on what is possible by clicking on elements of the model. This system clearly has some direct connections back to David Ungar and Randall Smith’s Self programming language, and to Randall Smith’s “Alternate Reality Kit” -- both from the late 1980s.
Crista Lopes (Univ. of California-Irvine) spoke about “Objects in the Age of Data” -- a reflection on how objects work to handle our current obsession with data.
Crista’s first observation is that OO languages are a story of boom and bust. The “peak” of development of new object oriented programming languages was in 1995. Since 2000, we have been in the “bust” phase -- more new language work has been in functional languages such as Scala, F#, Clojure, Elm, and Julia. She observed that in the 1980s and 1990s, most of the interesting systems development was in interactive PC applications with GUI interfaces -- a natural place to use objects. In the early 2000s, the move was to web development and client-server, and scripting languages have been a better fit. And now in the age of distributed computing and cloud, most of the interesting work is in “data processing apps” -- functional programming languages are better at specifying data transformations and generating efficient parallel code. We aren’t using “long lived objects” as much as before -- we operate on transient data instead.
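Crista’s point about data transformations can be suggested with a tiny sketch (my own illustrative example, not from her talk): a pipeline of pure functions over transient records, with no long-lived mutable objects.

```python
# Sketch of the "data transformation" style: filter, map, and aggregate
# over transient records instead of mutating long-lived objects.
# (Illustrative only -- data and field names are invented.)
from functools import reduce

orders = [
    {"customer": "alice", "amount": 120.0},
    {"customer": "bob", "amount": 35.5},
    {"customer": "alice", "amount": 60.0},
]

# Each stage is a pure function of its input; nothing is mutated in place,
# which is what makes this style easy to parallelize.
large = filter(lambda o: o["amount"] >= 50.0, orders)
amounts = map(lambda o: o["amount"], large)
total = reduce(lambda acc, x: acc + x, amounts, 0.0)

print(total)  # 180.0
```

The same pipeline maps naturally onto distributed frameworks, which is part of why functional styles fit the “data processing apps” Crista described.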
Crista still finds objects to be a useful model, especially for systems where we need to represent “state”. The state information might be information in a database, or it might be objects in a discrete event driven simulation program. Crista sees three different approaches for working with data in an OO system:
Crista thinks that there will be more OO languages... focusing on different data interaction approaches. She described one possible direction -- by showing some examples of a new small programming language she is working on in her research at UC-Irvine. It hasn’t been released yet, but it tries to address several problems related to the security and reliability of code.
This “programming style” is similar to the Actor programming style (active objects), but it is also similar to the old idea of tuple spaces. The dynamics are constrained... objects are only reclassified once per time tick.
(Note: Crista is a proponent of the idea that useful programming languages always place “constraints” on the kinds of programs that developers can create -- and that this is a good thing. With the right constraints, the resulting programs will be more readable, easier to modify, and higher quality.)
We had seven people in an all-day workshop, Escaped From the Lab -- a workshop that explored the sources of innovation, as well as what needs to be done to convert ideas into valued innovations. This is a topic we have addressed in previous workshops, but our group had some especially interesting insights this time.
The main discussions centered on these questions:
On the question of “how invention and innovation are different”, we concluded that inventions only need to be “new” and “cool”. On the other hand, innovations have to be something that is useful to society. In addition, innovations might be a composite of a number of inventions -- some new and some not-so-new. The innovation process involves some invention and some integration of pieces of technology into a useful product or ecosystem.
The topic of “ecosystem” came up several times in the workshop. Ecosystems are mostly “good”, because one of our innovations will often have more value to customers when it can be employed in combination with other components, applications, and services. On the other hand, there is a lot of frustration with some popular frameworks because they seem to be designed to create customer “lock-in”. We moaned about various popular cloud technology ecosystems: Amazon Web Services (AWS), Apple’s iCloud and related technologies, Google’s cloud, Android cloud technology, and so on. AWS, for example, makes it easier to build prototype systems with a lot of cool features, but it always feels like you are working on a prototype -- never progressing to a real solid production system. And if you want to take advantage of components and services outside of AWS, it isn’t easy. Each ecosystem makes it hard to switch to another ecosystem.
We discussed the lifecycle of new innovations from brainstorm to product -- and how well it is working outside of the old-fashioned single-company environment of “big industrial research lab to product line organization”.
The most interesting discussion was around the issue of “keeping the innovations going”. In today’s fragmented and chaotic product development universe, the creators of an innovative product need to look for both funding and visibility: attracting new venture capital funding or even crowdfunding to sustain the development and support team. For some products, teams can use distribution models like the “freemium” and “trusted tester” models to bring in extra funding while building a base of loyal users. The marketing strategy for a product must include outreach to conventional media, “influencers”, and other social media channels.
One useful side discussion in the workshop -- what is good advice to experts in programming language technology? The basic story is cautionary:
For more information about the 2017 Escaped From the Lab workshop, see the Workshop Final Report.
I enjoyed the presentations in the SPLASH-E session this year. Many of the presenters were addressing challenges in their software engineering courses -- much less this year about pedagogy and technology choices for first-year software courses.
The most interesting talk was by Elisa Baniassad from Univ. of British Columbia. She has come up with a very clear visualization that she uses in teaching about the Liskov Substitution Principle in software design. This is a tricky topic, because it is easy for students to get confused about what it means for subclass methods to allow broader preconditions and narrower postconditions (but not the opposite). Elisa’s visualization is a simple heuristic based on a “smiley face” -- wider at the top (the inputs or preconditions to the method) and narrower at the bottom (the outputs or postconditions). I plan to use this idea when I update the inheritance heuristics slides in my OO Design Heuristics tutorial.
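The “wider inputs, narrower outputs” rule can be made concrete with a small sketch (my own illustrative example, not from Elisa’s talk): a subclass may accept *more* than its base class promises to accept, and may promise *more* about its result, but never the reverse.

```python
# Illustrative sketch of the Liskov Substitution Principle: a subclass
# may broaden preconditions (accept more inputs) and narrow
# postconditions (promise more about outputs) -- never the opposite.

class Parser:
    def parse(self, text: str) -> list:
        """Precondition: text is a non-empty string.
        Postcondition: returns a list (possibly empty)."""
        if not text:
            raise ValueError("text must be non-empty")
        return text.split()

class LenientParser(Parser):
    def parse(self, text) -> list:
        """Broader precondition: also tolerates empty or None input.
        Narrower postcondition: always returns a NON-empty list."""
        if not text:
            return ["<empty>"]
        return text.split()

# Any caller written against Parser keeps working when handed the
# subclass -- the essence of substitutability.
def count_tokens(p: Parser, text: str) -> int:
    return len(p.parse(text))

print(count_tokens(LenientParser(), "a b c"))  # 3
```

Flipping the rule (a subclass that rejects inputs the base class accepts, or returns values the base class never would) is exactly what breaks callers -- the “frowny face” case.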
There was a good SPLASH-E paper about finding appropriate course projects for masters-level graduate students to work on in software courses. Although some professors attempt to have students make updates to real-world open source projects, it can be very difficult for students to succeed, because they usually don’t get the chance for face-to-face communication with the projects’ “core team experts”. The paper by Zhewei Hu (grad student at North Carolina State Univ.) describes how NCSU has sponsored its own open source project -- Expertiza. Expertiza is a peer-assessment tool managed by an NCSU staff group. Since it was locally developed, the students working on it had a much easier time interacting with the package’s core team. Over a period of five years, there have been 313 course projects to make improvements to Expertiza (involving some refactoring and adding new feature code). About 35% of these student projects were “accepted” and merged into the code base.
I assisted Steve Fraser (Innoxec) with a panel session for the 50th anniversary of Simula 67. Steve put together a group of diverse experts: the three SPLASH keynote speakers (Chris Granger, Crista Lopes, and Lera Boroditsky from UC San Diego) plus a Microsoft Research veteran (Sumit Gulwani) working on end-user programmability in Excel and one of the pioneers of the R language (Robert Gentleman from 23andMe).
Steve and I will be working on a detailed writeup of the panel results, but in the meantime, we can say that the discussion was a lot different (and more interesting) than we expected.
One of the most interesting ideas that most of the panelists agreed on: there may be places where “multi-modal” communication with computers can help with ambiguity in the communication process.
I had an interesting side discussion with Gary Miller of UTS (Sydney, Australia), Crista Lopes from UC-Irvine, and Igor Peshansky from Google (NYC). The discussion was about programming languages that have long term success and usefulness in industry. Gary expressed a preference for flexible type systems, such as the structural typing in the Go language (from Google). Crista had already expressed a preference for programming languages that put serious “constraints” on programmers, and Igor and I thought that for 500-person projects, it’s better to have “nominal typing” (type checking based on the actual name of the type as opposed to the contents of the structure/class). It makes manual code inspection and tool-based code checking much easier. My thinking is that when using dynamic languages like Python, people reading complex code can easily be confused. It is really easy to write code where the types of the variables are difficult for other programmers to determine by context.
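The distinction we were debating can be suggested in Python, which happens to support both styles (an illustrative sketch of my own, not from the discussion): `typing.Protocol` gives Go-flavored structural conformance, while ordinary inheritance is nominal.

```python
# Structural vs. nominal typing, sketched in Python (illustrative only).
# Structural (Go-style): anything with the right method shape conforms,
# no declaration needed. Nominal: conformance requires naming the type.
from typing import Protocol, runtime_checkable

@runtime_checkable
class Closer(Protocol):          # structural: defined by shape, not name
    def close(self) -> None: ...

class FileHandle:                # never mentions Closer anywhere
    def close(self) -> None:
        pass

print(isinstance(FileHandle(), Closer))  # True: it has the right shape

class Resource:                  # nominal: must be inherited explicitly
    def close(self) -> None: ...

class Connection(Resource):
    pass

print(isinstance(Connection(), Resource))  # True: declared subclass
print(isinstance(FileHandle(), Resource))  # False: same shape, wrong name
```

The nominal check is what makes large-team code review easier: the declared type name tells the reader (and the tools) exactly what contract is intended, rather than leaving it to be inferred from the structure.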
Igor had praise for Microsoft’s work on LINQ (Language Integrated Query) -- a query facility in the .NET framework that works over databases, XML, and in-memory collections. He said that “Microsoft has done a good thing” by creating LINQ and including support for LINQ directly within the C# language. This points back to Crista’s discussion in her keynote talk about the increasing importance of data in today’s application environment.
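LINQ itself is a C#/.NET feature, but its flavor -- declarative queries embedded directly in the host language -- can be suggested with a rough Python analogue (illustrative only; the data is invented and this is not LINQ):

```python
# Rough Python analogue of a LINQ query along the lines of:
#   from p in people where p.Age >= 18 orderby p.Name select p.Name
# A comprehension plays the role of the embedded query syntax.
people = [("carol", 17), ("alice", 34), ("bob", 25)]

adults = sorted(name for name, age in people if age >= 18)
print(adults)  # ['alice', 'bob']
```

The point Igor was making is that the query is checked and composed by the host language itself, rather than living in an opaque string handed to a database driver.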
SPLASH attendance has been pretty stable for several years... about 550 this year. Vancouver is always a good site to attract attendees from both Europe and Asia. The conference attendees are a pretty young crowd -- about 40% students. Only about 20% of the attendees have an industry affiliation.
In the future, the research papers of SPLASH (known as the OOPSLA research track) will be journal articles -- this makes university folks very happy. OOPSLA has always been a “good conference to publish in”, but not all university committees have recognized that an OOPSLA paper is the equivalent of a physics journal paper. The OOPSLA research papers, as well as some papers from other major ACM SIGPLAN conferences, will be published in the new journal Proceedings of the ACM on Programming Languages (PACMPL).
SPLASH 2018 will be held October 14-18, 2018 in Boston, at Northeastern University. See https://conf.researchr.org/track/splash-2018/splash-2018-OOPSLA for more details.