New website!

I’m in the process of setting up a brand new website. See http://margaretstorey.com/ for more recent content (as of November 2016)!

I gave an invited talk at the VISSOFT conference in beautiful Victoria, B.C. (co-located with ICSME 2014) on September 29th.

Slides are posted here:  http://www.slideshare.net/mastorey/visualization-for-software-analytics


The popularity of software visualization research over the past 30 years has led to innovative techniques that are now seeing widespread adoption by professional software practitioners. But this research has barely kept pace with some of the radical changes occurring in software engineering today. In this talk, I explore current trends in software engineering, including the prevalence of software ecosystems and software delivery as a service, and the emergence of the social coder within a participatory development culture. I will also discuss how the field of software analytics has matured and seeks to support practitioners in improving software quality, user experience and developer productivity through data-driven tasks. Finally, I suggest that software visualization should be playing a bigger role in these recent trends, emphasizing that interactive visualizations are poised to play a critical role in the field of software analytics.

The 22nd ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE 2014) will be held in Hong Kong, China between November 16 and November 22, 2014. Hong Kong is a lively place with a beautiful harbor and landscape. It is a city internationally known for its finance, shopping, and food with a good mix of Eastern and Western cultures. FSE is an internationally renowned forum for researchers, practitioners, and educators to present and discuss the most recent innovations, trends, experiences, and challenges in software engineering.

Papers are due March 16th!   We look forward to your submissions!   Learn more...

We recently conducted a survey with over 1,400 developers to find out how they use Twitter or why they reject it. The questions we asked were based on an earlier survey with 271 developers and 27 interviews. We discovered why some developers love it, why some do not, and what kinds of strategies developers use to improve their use of Twitter. Here is a brief snapshot of the results.

Why some developers love Twitter

Why some developers resist Twitter

What strategies should developers use on Twitter?

But what do you think?  Do YOU think developers should use Twitter?  Do you love or hate Twitter? Comment below!

Want to read more?  See a longer blog post here  and an even longer paper here, or participate in our latest survey about social media in software development.

The following are the slides and abstract for a keynote presented at SLE (Software Language Engineering) 2012 in Dresden, Germany, on September 28th, 2012.

KEYNOTE TITLE: Addressing Cognitive and Social Challenges in Designing and Using Ontologies in the Biomedical Domain

ABSTRACT: Ontologies can provide a conceptualization of a domain leading to a common vocabulary for communities of researchers and important standards to facilitate computation, software interoperability and data reuse. Most successful ontologies, especially those that have been developed by diverse communities over long periods of time, are typically large and complex. To address this complexity, ontology authoring and browsing tools must provide cognitive support to improve comprehension of the many concepts and relationships in ontologies. Ontology tools must also support collaboration, as ontology design and use center on community consensus.

In this talk, I will describe how standardized ontologies are developed and used in the biomedical and clinical domains to aid in scientific and medical discoveries. Specifically, I will present how the US National Center for Biomedical Ontology has designed the BioPortal ontology library (and associated technologies) to promote the use of standardized ontologies and tools. I will review how BioPortal and other ontology tools use established and novel visualization and collaboration approaches to improve ontology authoring and data curation activities. I will also discuss an ambitious project by the World Health Organization that leverages the use of social media to broaden participation in the development of the next version of the International Classification of Diseases. To conclude, I will discuss the challenges and opportunities that arise from using ontologies to bridge communities that manage and curate important information resources.

The following are the slides and abstract for a keynote presented at MSR 2012 in Zurich, Switzerland on June 3rd, 2012. Also available are an audio and slide capture at https://vimeo.com/43620623 and an unedited audio track.


Social media has revolutionized how humans create and curate knowledge artifacts. It has increased individual engagement, broadened community participation and led to the formation of new social networks. This paradigm shift is particularly evident in software engineering in three distinct ways: firstly, in how software stakeholders co-develop and form communities of practice; secondly, in the complex and distributed software ecosystems that are enabled through insourcing, outsourcing, open sourcing and crowdsourcing of components and related artifacts; and thirdly, by the emergence of socially-enabled software repositories and collaborative development environments.

In this talk, I will discuss how software engineers are becoming more “social” and altruistic, defying the old-fashioned stereotype of the solitary and selfish programmer. I conjecture that media literacy and networking skills will become just as important as technical skills for creating, curating and managing today’s complex software ecosystems and software knowledge. I will also discuss the influence of social media and social networks on software development environments and repositories. I propose that social media is responsible for the shift from a software repository as a “space” that stores software artifacts, to a “place” where developers learn, reuse, share and network.

The convergence of software tools with social media naturally influences the information that can be mined from software repositories, challenging not only the questions that motivate these mining activities, but also the very definitions of what comprises a software repository or even a software programmer. Finally, I will suggest that it is imperative to consider both the positive and negative consequences of how programming in a socially-networked world might impact software quality and software engineering practices. As Marshall McLuhan eloquently said in 1974, “If we understand the revolutionary transformations caused by new media, we can anticipate and control them; but if we continue in our self-induced subliminal trance, we will be their slaves.”

Update: I presented a keynote on this topic at the CSER meeting last October in Toronto. In that talk I summarized the panel discussion for those who could not be at ICSE. The slides from this talk are posted on slideshare: http://www.slideshare.net/mastorey/research-industry-panel-review.

The panel we organized at ICSE 2011 was a great success. There was apparently standing room only at the back of the room, so the topic clearly struck a chord with ICSE attendees.

The presenters each had a unique perspective to offer with very useful advice for graduate students and professors engaging in industrially relevant research.

Some of the presenters’ slides with some comments on the panel by Dr. Jorge Aranda are available here:

The introductory slides for the panel are available here: http://www.slideshare.net/mastorey/icse-2011-research-industry-panel-8784035

(Note: this is cross-posted at Jorge Aranda’s blog; please post your thoughts there. This post is based on the work of its coauthors, Jorge Aranda and Margaret-Anne (Peggy) Storey, as well as of Daniela Damian, Marian Petre, and Greg Wilson.)

Listening to software professionals over the past few years, we sometimes get the impression that software development research began and ended with Fred Brooks’ case study of the development of the IBM 360 operating system, summarized in “The Mythical Man-Month,” and with his often-quoted quip that adding people to a late project only makes it later. Now and then, mentions of Jerry Weinberg (on ego-less programming) and of DeMarco and Lister (on how developers are more productive if they’re given individual offices) pop up, and for the most part, it seems as if the extent of what software development academics have to offer to practitioners is a short list of folk sayings tenuously validated by empirical evidence. The fact that Brooks, Weinberg, DeMarco, and Lister are not academics — or were not at the time of these contributions, as in the case of Brooks — only makes the academic offerings look worse.

And yet, the software development academic community is large and increasingly empirical. The International Conference on Software Engineering (ICSE), its most important gathering, consistently draws a crowd of over a thousand researchers. Researchers mine software repositories, perform insightful ethnographic studies, and build sophisticated tools to help development teams become more efficient. Many researchers, from junior Master’s students to tenured professors, jump at the opportunity to study and help software organizations. In other words, there is a significant academic offering of results on display. But if we look at the list of ICSE attendees, we discover that industrial participation is very low (less than 20% last year), and there seems to be very little dissemination of scientific findings overall. What is going on? Are we wasting our time studying problems that practitioners do not care about? Or do we have a communication problem? Are practitioners expecting help with intractable problems? And most importantly, how can we change this situation?

To explore these questions, we decided to interview leading practitioners. Over the past few months, we talked to CEOs, senior architects, managers, and creators of organizations and products most of us would recognize and use. We asked them to tell us their perceptions of our field and how they think we could improve our relationships with them. One outcome of these interviews was the organization of a panel at ICSE, where people who straddle the line between research and practice will use insights from these interviews as a starting point to discuss the apparent industry-research gap.

We are still thinking about how to disseminate the observations from our ongoing interviews. For now, we want to broadcast some of the most important points from our conversations here, in blog post format, hoping to give them as much exposure as possible.

Perceptions of software research

For those of us venturing out of the ivory tower to do empirical research, it shouldn’t be a surprise that many practitioners have a general disregard for software development academics. Some think our field is dated, and biased toward large organizations and huge projects. Others feel that we spend too much time with toy problems that do not scale, and as a result, have little applicability in real and complex software projects. Most of our interviewees felt that our research goals are unappealing or simply not useful. This was one of the strongest threads in our conversation: one person told us that our field is this “fuzzy stuff at a distance that doesn’t seem to affect [him] much,” another, that we ignore the real cutting-edge problems that organizations face today, and one more, a senior architect about to make the switch to academia himself, gave a rather scathing critique of the field.

“[I’m afraid] that industrial software engineers will think that I’m now doing academic software engineering and then not listen to me. (…) if I start talking to them and claim that I’m doing software engineering research, after they stop laughing, they’re gonna stop listening to me. Because it’s been so long since anything actually relevant to what practitioners do has come out of that environment, or at least the percentage of things that are useful that come out of that environment is so small.”

Part of the problem seems to be that we have only been able to offer professionals piecemeal improvements. Software development is essentially a design problem, a wicked problem, and it is not amenable to silver bullets (as, ahem, Fred Brooks argued convincingly decades ago). But the immaturity and difficulty of software development still make it a prime domain for the presence and profit of snake oil salesmen — people who are not afraid to advertise their miraculous formulas, grab the money and run. Honest academics, reporting improvements of 10% or 20% for a limited domain and under several constraints, have a hard time being heard above the noise.

Difficulty in applying our findings

The problem with piecemeal improvements has another angle: many professionals can’t be bothered to change their processes and practices for gains as small as 10% or 20%, since overcoming their organizational inertia and forcing themselves to incur significant risks may be more costly than the benefits they’d accrue.

“(…) it would depend in part of how cumbersome your techniques are; how much retraining I’m going to have to do on my staff. (…) I might decide that even if you’re legit and you actually do come up with 15%, that that’s not enough to justify it.”

This puts us in a bit of a quandary as we’re extremely unlikely to come up with any technique that will guarantee a considerable improvement for software organizations. At the same time, they’re extremely unlikely to adopt anything that doesn’t guarantee substantial improvements or that requires them to change their routines significantly. However, there are a few ways out of this problem. One of them is to propose lightweight, low-risk techniques. Another is to aim for organizational change at the periphery, in pilot projects, rather than at the core, hoping that the change will be appealing enough that it will spread through the organization. But it’s an uphill battle nonetheless.

What counts as evidence?

Another, perhaps bigger problem lies in the perception of what counts as valid scientific evidence. For better or worse, software developers have an engineering mindset, and have an idea of science as the calm and reasoned voice of hard data among the cackling of anecdote. The distinction between hard data and anecdote is binary, and hard data, according to most of our interviewees, is quantitative data; anything else is anecdote and should be dismissed.

“without measurements you can’t… it’s all too wishy-washy to be adopted.”

“managers are coin operated in some sense. If you can’t quantify it in terms of time or in terms of money, it doesn’t make much difference to them. (…) I think there does need to be some notion of a numeric or at least an objective measure.”

“So when you’re gonna tell me that I’m wrong, which is a good thing, you know you gotta have that extra ‘yeah, we ran these groups on parallel and guess what, here are the numbers'”

Why is this a problem? Because over the years, we as a community have come to realize that many of the really important software development problems are not amenable to study with controlled experiments or with (exclusively) quantitative data. Ethnographies, case studies, mixed-method studies, and others, can be as rigorous as controlled experiments, and for many of the questions that matter, they can be more insightful — but they don’t have the persuasive aura of a string of numbers or a p value. Faced with this perception, we have two choices. First, to give practitioners what they (think they) want: controlled experiments to the exclusion of everything else (never mind the fact that often these won’t be able to actually answer the questions that matter to professionals in a scientifically sound manner), or second, to push for a better dissemination of our results and methods, making the argument that there’s more to science than trial runs and statistical significance, and helping practitioners distinguish between good and bad science, whatever its methods of choice.

Dissemination of results

Although, from talking to our interviewees, it was clear that the dissemination of scientific results is almost non-existent, this seems to be a problem that we can address more easily than the others. Of course, presenting research findings to non-academics, as our interviewees reminded us, is difficult; you need to be a good storyteller, you need passion, clear data, and a strong underlying argument. To some extent, this is feasible.

In any case, it became evident that academic journals and conferences are not the right venues to reach software professionals overall. Blog posts may help communicate some findings (but it is hard to be heard above the noise), and books could help too (especially if you have Brooks’ writing abilities). Another alternative is intermediate journals and magazines, like IEEE Software and ACM Queue. One interviewee suggested that we should be visiting industry conferences way more often; when a researcher ventures into an industry conference with interesting data, it does seem to generate excitement and good discussions, at the very least.

Areas of interest

We asked our interviewees what questions we should focus on; that is, what problems they struggle with on a frequent basis that researchers might tackle on their behalf. A few themes arose from their lists of potential problems:

  • Developer issues were very common. These include identifying wasteful use of developer time, keeping older engineers up to date with a changing landscape (an interesting riff on the rather popular research question of bringing new engineers up to speed with the current organizational landscape), identifying productive programmers and efficient ways to assemble teams, overcoming challenges of distributed software development, achieving better effort prediction, learning to do parallel programming well, and identifying mechanisms to spread knowledge of the code more uniformly throughout the organization.
  • Evaluation issues also arose frequently. Essentially, these consist of having academia perform the role of fact checker or auditor of proposals that arise from consultants, opinion leaders, and other influential folks in the software development culture. Many interviewees were curious to find out to what extent agile development works as well as its evangelists claim, for instance, but their curiosity also extends to other processes, techniques, and tools.
  • Design issues came up as well. One in particular seemed interesting: figuring out why some ideas within a project die after a lot of effort was spent on them. This could lead into techniques to identify ideas probably doomed to failure early on, so that the team can minimize the resources spent on them.
  • Tool issues were rather popular, and for many of the tools that our interviewees mentioned there is already some good work from our community that hopefully can be turned into tools that will be successfully adopted by the mainstream. Our interviewees were interested in tools that would provide warnings as a developer enters a conflicting area of the code, in good open source static analysis tools, in test suite analytics, and in live program analysis tools that scale well.
  • Code issues, though less common, were interesting as well. In particular, studying and providing help in dealing with the blurred line between project code and configuration code (and treating configuration code with the same care and level of tool-set sophistication that we give to project code), and providing a better foundation for higher-level abstractions such as modeling languages.
  • User issues arose more frequently than they seem to in our academic literature. Several of our interviewees wanted to bring user experience to the forefront, and some were concerned that software development skill and user experience gut instinct were rarely found in sufficient quantities in the same professional. One of them wanted to bring the kind of mining techniques that we use to analyze software repositories into an analysis of customer service audio and email data.

So as you can see, our interviewees brought up plenty of interesting research questions. Some of these questions are more approachable than others; some have already been addressed numerous times in research and are therefore now in need of better dissemination of findings.

In summary, the managers, creators, and architects we interviewed confirmed our fear that the software research academic community is extremely disconnected from software practice. This seems to be partly our fault (we often do not work on the issues that practitioners worry about, and we rarely reach out to them purposefully), and partly due to a misconception of what it means to do science and what counts as valid evidence in our domain.

We hope to further explore these initial insights from industry at our upcoming panel at ICSE. We have sought panelists who straddle the line between research and practice to provide their perspectives on what compelling evidence would look like to industry, to identify what they consider the important questions for academia, to suggest how we can more effectively disseminate results, and to suggest how we can engage in productive collaborative research that benefits both sides. In the meantime, we welcome your comments on this post! And stay tuned, as we will follow up to summarize the discussion from the ICSE panel.