ICSE 2016 SER&IP workshop

Workshop notes by Dennis Mancl (dmancl@acm.org)

I attended the Software Engineering Research and Industrial Practice Workshop at ICSE 2016 (May 17, 2016). This workshop brings together researchers and practitioners to discuss the current state of Software Engineering (SE) research and Industrial Practice (IP), and advance collaboration to reduce the gap between research and practice.

The workshop website is https://sites.google.com/site/serip2016.

Workshop main keynote: Jim Herbsleb

The first speaker of the day was Jim Herbsleb from Carnegie Mellon University, and his talk was on "Socio-Technical Coordination". There are many theories about how groups of technical experts (such as software developers) coordinate to solve problems and build systems. This topic is becoming more interesting as more development moves to distributed open source projects (the "GitHub" model).

A web of engineering decisions with dependencies

Herbsleb's simple model of a big project is this -- the team has to make a large number of engineering decisions in order to build a system, and those decisions are interdependent: each decision constrains other decisions, so the project as a whole forms a web of constraints.

If you model system development this way, then the constraint network defines the "problem" to be solved -- and the "solution" is found by a process of distributed constraint satisfaction.

Most folks try to simplify the problem, because the full network of constraints is too big and complicated to reason about all at once. We can define some "cut points" in the graph to break the constraint network into separate subgraphs, and then set up smaller teams to work on each subgraph (which we might call a "module").
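
To make the "cut points" idea concrete, here is a rough Python sketch (my own illustration, not Herbsleb's actual formulation -- the constraint network is invented) that partitions a small constraint graph into candidate modules with a standard community-detection routine from networkx:

# Partition an invented constraint network into candidate "modules".
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical engineering decisions A..H and the constraints between them.
constraints = [
    ("A", "B"), ("A", "C"), ("B", "C"),   # one tightly coupled cluster
    ("D", "E"), ("E", "F"), ("D", "F"),   # another cluster
    ("C", "D"),                           # a single cross-cluster constraint
    ("G", "H"), ("F", "G"),
]
g = nx.Graph(constraints)

# Each community is a candidate "module" that a smaller team could own.
modules = greedy_modularity_communities(g)
for i, module in enumerate(modules, start=1):
    print(f"module {i}: {sorted(module)}")

# Edges that cross module boundaries are the constraints that still
# require coordination between teams.
cross_team = [(u, v) for u, v in g.edges()
              if not any(u in m and v in m for m in modules)]
print("cross-team constraints:", cross_team)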

Each team gets to work on a smaller piece -- and because each team might be geographically colocated and its members may have collaborated before, the work on the smaller problems is much easier to manage. Teams can also use tools -- collaborative technology -- to work together.

File dependencies instead

Herbsleb went through a number of mathematical arguments to show that we can analyze "dependencies between source files" instead of analyzing dependencies between individual design decisions. This gives us some ideas for analyzing how teams collaborate by looking at the log of source file changes (change history in Git, for example).

If the developers use a specific collaboration technology to work together (such as Internet chat), we can compile a record of who collaborates with whom. We can then compare that record against the history of interfile dependencies to find cases where the files' creators should have coordinated... but never talked. This might be one way to find potential system problems that are due to a lack of coordination across the project.
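
Here is a toy Python sketch of that kind of check (the inputs are invented for illustration -- a real analysis would parse the Git history and the chat or review logs):

# Flag file dependencies whose authors have no recorded communication.
from collections import defaultdict
from itertools import combinations

# (commit author, files touched) -- e.g. parsed from `git log --name-only`.
commits = [
    ("alice", ["net/socket.c", "net/protocol.h"]),
    ("bob",   ["net/protocol.h", "ui/panel.c"]),
    ("carol", ["ui/panel.c", "ui/panel.h"]),
]

# Pairs of developers known to have communicated (from chat logs, reviews, ...).
talked = {frozenset(("alice", "carol"))}

# Map each file to the set of developers who changed it.
authors_by_file = defaultdict(set)
for author, files in commits:
    for f in files:
        authors_by_file[f].add(author)

# Report developer pairs who share a file but never talked.
for f, authors in sorted(authors_by_file.items()):
    for a, b in combinations(sorted(authors), 2):
        if frozenset((a, b)) not in talked:
            print(f"{a} and {b} both changed {f} but have no recorded contact")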

It is harder to do this analysis with GitHub -- it is difficult to have reliable collaboration statistics, except by looking for file changes that are "close together in time."

GitHub and reuse culture

GitHub has a side effect that increases complexity and dependencies -- developers have a tendency to link in code from other projects (reusing code). You can get a very dense dependency graph -- and although you might just continue to use an old copy of the code from another project, you will eventually have some pressure to move to more recent code from the imported projects.

Developers use GitHub in several ways, and some of their habits are recognizable "information seeking" practices.

Transparency in GitHub

GitHub culture also encourages developers to exercise a certain "discipline" in their development activities -- both when a developer adds code to his/her own project and when a developer evaluates the submissions of others. This discipline is a result of the "transparency" of the GitHub culture.

This transparency has some benefits. Conflicts become easier to manage. There is an opportunity to send comments and feedback before a commit is made. Also, members of the user community can "advocate" and "rally" for certain changes and improvements -- even over the objections of the main project owner. Herbsleb described a case where a few users proposed a feature change in a software tool, and the code owner initially refused to accept it. The users proposing the feature started a Twitter discussion -- and they got hundreds of their friends to "+1" the proposed change in the GitHub project. Eventually, the project owner accepted the feedback and committed the change.

Signaling theory

How do people decide that a particular GitHub project is worthwhile? No one has the patience to review the entire design and code base. Herbsleb pointed out that "signaling theory" may play a role.

There are certain visible behaviors... hallmarks that give outsiders a clue that the developers know what they are doing. These "signaling behaviors" are observable indicators (maybe the number of outside contributors to the project, the regular spacing of commits, the lack of bad code churn) that the "unobservable qualities" (good team collaboration, a solid code base, regular reviews of new feature requests) are actually present in the project.
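
As a small illustration, a couple of these signals can be computed directly from a local Git clone. This is my own sketch -- the signal definitions here are not from the talk:

# Compute two simple "signals" from the history of the current Git repository.
import subprocess
from statistics import mean, pstdev

def git(*args):
    return subprocess.run(["git", *args], capture_output=True,
                          text=True, check=True).stdout

# Signal 1: number of distinct contributors.
contributors = set(git("log", "--format=%ae").splitlines())

# Signal 2: regularity of commit spacing (a lower spread suggests steady activity).
timestamps = sorted(int(t) for t in git("log", "--format=%ct").splitlines())
gaps = [later - earlier for earlier, later in zip(timestamps, timestamps[1:])]
spacing_spread = pstdev(gaps) / mean(gaps) if gaps else float("inf")

print(f"contributors: {len(contributors)}")
print(f"commit spacing spread (lower = more regular): {spacing_spread:.2f}")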

There is probably a lot more to say about this topic. In the question-and-answer session, several people thought that it would be valuable to have a good manual of practices for interacting with an open source project.

Long-term industry collaboration on software analysis

The paper by Suresh Kothari from Iowa State was next. Suresh has been working for many years on "human aided automation" to do software analysis, transformation, and verification, and his tools have been applied to many sizeable software problems in the area of cyber-physical systems.

The biggest lesson for Suresh was this -- you have to listen and learn about the problems from industry, instead of just trying to sell companies on the value of your current academic work. You must look at how your research can be focused on practical industry problems. It can also help to find some good mentors who can show you how to do effective industry collaboration.

There are a lot of details in the workshop paper about how the research partnerships started and evolved. Three important points:

Other interesting talks

I presented the paper that Steve Fraser (who works on university research relations at HP, Inc.) and I co-authored on "Strategies for Building Successful Company-University Research Collaborations." The main message is that a company needs to have an initial plan of the research areas where collaboration will help, then choose the right university partners and collaboration model, set expectations on both sides, and track the results of the collaboration over time. It is good to make the "ROI" as visible as possible -- including direct impact on the business, the talent pipeline, and the company's external image.

Andrew McAllister from University of New Brunswick spoke about legacy code modernization, using a novel code parsing approach called "Programmars" (a "grammar" based representation of a program -- see http://www.eaglegacy.com/papers.html for recent papers on this technique). This approach seems to be more flexible for working with legacy code than some of their previous attempts to build tools using standard abstract syntax trees. It is relatively easy to customize a Programmar-based set of tools to work with a mixture of code that may be compiled with different versions of a compiler.

Didar-Al-Alum from University of Calgary presented some work about Technical Debt -- using code from real-world projects to try to validate some hypotheses about the correlations between technical debt, level of developer experience, legacy versus non-legacy projects, and other factors. The study used a combination of code analysis and developer interviews. There was a good correlation: developers with a high level of general development experience produced code with lower rates of technical debt. Note, however, that the amount of experience on the current project did not correlate strongly with technical debt. Although the study did not find many other significant results, there are some valuable ideas in the group's experimental design and interview techniques.
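
For illustration, here is a minimal Python sketch of the kind of correlation the study examined -- the numbers are invented, not the study's data:

# Correlate general development experience with a technical-debt measure.
from statistics import correlation  # Pearson correlation, Python 3.10+

years_experience = [1, 2, 3, 5, 8, 10, 12]              # hypothetical developers
debt_per_kloc    = [9.1, 8.4, 7.0, 5.5, 3.2, 2.8, 2.1]  # hypothetical debt measure

r = correlation(years_experience, debt_per_kloc)
print(f"Pearson r = {r:.2f}")  # strongly negative: more experience, less debt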

Lori Pollock from University of Delaware was the afternoon keynote speaker. She gave a great presentation on how to run good "field studies" of developer behavior. Many of the papers and studies on new tools and techniques in software development are small "controlled studies" in a university lab -- a relatively artificial environment. These controlled studies give useful results, but it can really help to follow up with real-world "field studies". Lori's talk showed a number of examples of field studies -- and the best examples used automated collection of data through tools such as API plug-ins. There are some important issues to consider in the design of these monitoring tools -- use simple logging with timestamps for significant developer events, maintain the anonymity of developers, and don't reveal proprietary information about the code.
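
A minimal sketch of a monitoring logger that follows that guidance might look like the Python below. The event names and fields are my own illustration, not taken from any specific study tool:

# Log timestamped developer events with anonymized identities and no code contents.
import hashlib
import json
import time

SALT = "per-study-secret"  # prevents trivially reversing the hashed IDs

def anonymize(developer_id: str) -> str:
    return hashlib.sha256((SALT + developer_id).encode()).hexdigest()[:12]

def log_event(developer_id: str, event: str, **details):
    record = {
        "ts": time.time(),               # timestamp of the developer action
        "dev": anonymize(developer_id),  # anonymized identity
        "event": event,                  # e.g. "test_run", "file_opened"
        **details,                       # metadata only -- never source code
    }
    with open("study_events.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

log_event("alice@example.com", "test_run", outcome="failed", duration_s=3.4)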

Alfonso Fuggetta from CEFRIEL in Italy presented an API-related talk -- a successful attempt to get a number of companies and government agencies to convert to an API-centric development model to build business systems that needed to share information. The project was the "E015 Digital Ecosystem" developed for Expo Milano 2015, and it was successful in enabling information sharing in multiple domains. Transportation information (traffic information, traffic cameras, railway schedules and status, airport flight information, and car parking data) was made available by companies and public entities to be shared in many different applications -- mobile apps, websites, information kiosks, and so on. A total of 40 applications were published during Expo Milano 2015, using 100 different APIs that were part of the E015 ecosystem. More information is available at http://www.expo2015.org/archive/en/projects/e015.html.

Carol Woody from the CERT Division of SEI talked about the Security Engineering Risk Analysis (SERA) framework -- a way to address software security risks. The full description of the framework is in an SEI report: http://resources.sei.cmu.edu/library/asset-view.cfm?AssetID=427321. It is a good way to collect together a useful set of security risks. The risks need to be documented along with the "contexts" for the key security issues. This framework helps get security requirements into the software requirements early in the cycle.

Massila Kamalrudin from Technical University Malaysia Melaka presented a talk on the "TestMEReq" system -- a tool for modeling requirements and linking the requirements to "tests". The TestMEReq process is based on Essential Use Cases (a simple use-case-based approach from Larry Constantine in the early 2000s), and Essential User Interfaces (simple user interface prototypes). The tool builds an internal requirements model directly from the structured-text-based use case specs and prototype descriptions. The initial tests are built from the model. As the design evolves, these tests can be expanded to become real software tests.
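
As a rough illustration of deriving abstract tests from a structured-text use case (my own sketch, not the TestMEReq implementation), each "system responsibility" step in an Essential Use Case could become a placeholder test:

# Turn the "system" steps of a structured-text use case into test stubs.
import re

essential_use_case = """\
Use case: Withdraw cash
1. user: identifies self
2. system: verifies identity
3. user: chooses amount
4. system: dispenses cash
"""

for line in essential_use_case.splitlines():
    match = re.match(r"\d+\.\s*system:\s*(.+)", line)
    if match:
        step = match.group(1)
        name = "test_" + re.sub(r"\W+", "_", step.lower()).strip("_")
        print(f"def {name}():")
        print(f'    """System shall: {step} (to be refined as the design evolves)."""')
        print("    assert False  # placeholder until a real test exists")
        print()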

Marc Schreiber from University of Applied Sciences in Aachen, Germany presented his experiences in working with a medical systems company on requirements problems. The company had a large body of text-based documents that contained technical information that needed to be incorporated into requirements documentation -- and extracting the information manually was very tedious. He and his team were brought in to apply some Natural Language Processing tools (and some machine learning) to automate the extraction. They designed a good process that borrowed ideas from "continuous integration" tools. Their extraction system was very minimalist at the start, but as they made the "language processing pipeline" more sophisticated and increased the "training and tuning" of the tools, they were able to get the output of the tools to "converge". This was a hard problem -- much more difficult than the average academic problem -- and the company would not have been able to do the work on their own.
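
Here is a bare-bones Python sketch of the "grow the pipeline, re-measure, repeat" idea -- the extractors, documents, and gold data are invented for illustration:

# Score successive versions of an extraction pipeline against a small gold set.
import re

gold = {  # document id -> facts a human expects the pipeline to extract
    "doc1": {("max_voltage", "5 V")},
    "doc2": {("operating_temp", "40 C")},
}
documents = {
    "doc1": "The device tolerates a max voltage of 5 V.",
    "doc2": "Normal operating temp is 40 C in the ward.",
}

def extract_v1(text):
    return {("max_voltage", m) for m in re.findall(r"max voltage of (\d+ V)", text)}

def extract_v2(text):
    # A later, more sophisticated pipeline version adds a second extraction rule.
    facts = extract_v1(text)
    facts |= {("operating_temp", m) for m in re.findall(r"operating temp is (\d+ C)", text)}
    return facts

def recall(extractor):
    hits = sum(len(extractor(documents[d]) & gold[d]) for d in gold)
    return hits / sum(len(facts) for facts in gold.values())

for version, extractor in [("v1", extract_v1), ("v2", extract_v2)]:
    print(f"pipeline {version}: recall {recall(extractor):.0%}")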

Moritz Beller from Delft University of Technology talked about the design -- and redesign -- of their WatchDog test monitoring plugin for Eclipse. This was also an interesting industry collaboration effort, because they had huge problems early... it was impossible to get companies to trust them. The main lesson from Moritz: if you come to companies with a "solution", they might not even recognize that they have a problem. If you try to get individuals to accept your tool, you will also have problems -- because not everyone uses Eclipse. The solution to the lack of acceptance involved multiple changes: redesigning WatchDog to be more portable (working with both Eclipse and IntelliJ), moving many common operations to a lightweight server application, and using a large number of open source systems to manage data and run analytics. The paper summarizes a set of seven "guidelines" that other IDE plug-in builders should consider to improve their own toolsets. More information on the WatchDog plugin can be found at this website: http://www.testroots.org.

There was also a short discussion about "open source" and its role in collaborations. I will have more detailed notes available about this session later.


Last modified: May 17, 2016