Qualitative Synthesis of SE Research
Chairs: Daniela Cruzes, Tore Dybå and Per Runeson
Location: Max Bell Room 251
Background:
Synthesizing the evidence from a set of studies that spans many countries and years, and that incorporates a wide variety of research methods and theoretical perspectives, is not a trivial task. Research synthesis is a collective term for a family of methods for summarizing, integrating, combining, and comparing the findings of different studies on a topic or research question. Such synthesis can also identify crucial areas and questions that have not been addressed adequately by past empirical research. It builds on the observation that, no matter how well designed and executed, single studies yield findings that are limited in the extent to which they can be generalized. Research synthesis is, thus, a way of making sense of what a collection of studies is saying.
Session goal:
This year we continue a series of sessions to deepen the knowledge on synthesis of empirical studies in SE. The goal of this session is to discuss research challenges in synthesizing qualitative evidence in ESE with a special focus on case studies.
Development of the session:
The session will have the following structure:
- Presenting a set of relevant techniques for case study synthesis, including thematic synthesis and cross-case comparison.
- Open discussion on drawbacks, flaws, and challenges.
- Wrap-up of the Session.
Session A2: What are the Important Problems in Our Field?
Chairs: Guilherme Travassos and Tore Dybå
Location: Max Bell Room 251
Background:
What are the important problems in Software Engineering? Are we doing research that has an impact?
Session goal:
To discuss and prioritize the important research questions in Software Engineering from the perspective of ISERN participants.
Development of the session:
On day one, as part of the welcome and introduction session, the audience will be invited to write down their single most burning research question and post it on a board during the first day. To motivate the activity and give the discussion perspective, short motivational material will be distributed. The questions will be collected the next morning, and in the session these will be the questions to discuss and prioritize. The audience will be organized in groups to work out the questions, and a summary will then be produced.
Session B2: Software Assurance, Neglected or Unnecessary?
Chairs: Dan Port, Yuko Miyamoto and Haruka Nakao
Location: Max Bell Room 252
Background:
Recent work at JPL indicates that different groups of stakeholders have significantly different ideas about what constitutes software assurance (SA) activities, as well as different expectations of their benefits and outcomes. Such differing perspectives on SA are both pervasive and persistent. There is a need to establish clarity on what activities constitute SA, the conditions in which these are needed or desirable, and the means for demonstrating their benefit. To this end, an extensive empirical study was conducted at JPL with participation from JAXA and the NASA assurance community at large. The primary outcome of this study was a proposed new definition and "value proposition" for SA, meant to clarify the nature of SA and its tangible expected value to software projects.
Session goal:
To stimulate interest and collaboration activities in utilizing the proposed new SA definition and "value proposition" as a unifying principle for SA operations and research going forward. The expected outcome of the session is to establish, clarify, and prioritize a list of "fundamental" research opportunities in SA.
Development of the session:
The session starts with an introduction to SA. This is followed by an interactive discussion, with a panel of SA practitioners and researchers, of the proposed new SA definition and value proposition. Finally, from a brainstorm on research questions, research suggestions and opportunities to address those questions are identified and prioritized.
Session A3: Great Debate
Chair: Mike Barker
Location: Max Bell Room 251
Background:
Resolved: Using Cloud Computing means End Users don't need Empirical Software Engineering. Depending on who you listen to, cloud computing means never having to worry about programming, software, maintenance, backups, all of that stuff anymore! Just push all of your work into the cloud, access it anywhere and anytime you like, and everything will be wonderful! Right? So... does this mean that end-users and corporate cloud users can quit worrying about empirical software engineering?
Session goal:
Share ideas and thinking about how empirical software engineering fits into an environment where most computing is done "in the cloud." What kind of "empirical software engineering literacy" does cloud computing require from its end users? Can they really just ignore everything, or does using cloud computing require them to pay attention to certain specific types of research and results?
Development of the session:
1. We'll start by assuming that cloud computing really is the answer to all our problems, and in teams, consider how much using cloud computing reduces the need for end users to understand empirical software engineering models and results.
2. Then we'll consider what empirical software engineering knowledge is needed by end users and cloud system developers and providers, and what research studies need to be done in the cloud environment.
3. We'll summarize this as the challenges that cloud computing poses to ISERN.
Session B3: Empirical Approaches to Support Decision Making in Industry
Chairs/Panelists: Pete Rotella, Brian Robinson, Nachi Nagappan and Audris Mockus
Location: Max Bell Room 252
Background:
The role of measurement-based decision making has dramatically increased in the corporate software development environment over the last decade. Many of the measures are based on data from corporate issue tracking and software development databases, much like the data underlying empirical studies of software engineering. However, the goals of measurement in industry are substantially different, as are the standards of what constitutes valid evidence.
Session goal:
Share experiences of software quality and productivity measures that are based on corporate databases, including software development, sales, and services. Explain how and why the measures were designed and are used to make business and development decisions at the level of an individual developer, a project, and the entire corporation. The session will also outline industry needs to academic participants.
Development of the session:
Brief statements by panelists followed by general discussion. Each panelist:
1. Gives the primary objectives of such measurement programs in their context.
2. Outlines the approaches that worked in the past and present existing and future challenges.
3. Provides examples of what is accepted as valid evidence in a particular industry context.
4. Outlines challenges that remain, translating the above into a language that participants from academia can understand (and act upon).