May 2022 To Do

Reviewed by Greg Wilson / 2022-05-14
Keywords: Editorial

We’re still working on the videos from our April 27 lightning talks, so I’ve taken some time to add 29 recent papers to our review queue:

  • Abou Khalil and Zacchiroli: “The General Index of Software Engineering Papers”
  • Ait et al: “An Empirical Study on the Survival Rate of GitHub Projects”
  • AlOmar et al: “Code Review Practices for Refactoring Changes: An Empirical Study on OpenStack”
  • Asare et al: “Is GitHub’s Copilot as Bad As Humans at Introducing Vulnerabilities in Code?”
  • Bendrissou et al: “‘Synthesizing Input Grammars’: A Replication Study”
  • Bogner and Merkel: “To Type or Not to Type? A Systematic Comparison of the Software Quality of JavaScript and TypeScript Applications on GitHub”
  • Caddy et al: “Is Surprisal in Issue Trackers Actionable?”
  • de Souza Santos and Ralph: “Practices to Improve Teamwork in Software Development During the COVID-19 Pandemic: An Ethnographic Study”
  • Foidl et al: “Data Smells: Categories, Causes and Consequences, and Detection of Suspicious Data in AI-based Systems”
  • Furia et al: “Applying Bayesian Analysis Guidelines to Empirical Software Engineering Data: the Case of Programming Languages and Code Quality”
  • Galappaththi et al: “Does This Apply to Me? An Empirical Study of Technical Context in Stack Overflow”
  • Gavidia-Calderon et al: “Quid Pro Quo: An Exploration of Reciprocity in Code Review”
  • Grotov et al: “A Large-Scale Comparison of Python Code in Jupyter Notebooks and Scripts”
  • Härtel and Lämmel: “Operationalizing Threats to MSR Studies by Simulation-Based Testing”
  • Hellman et al: “Characterizing User Behaviors in Open-Source Software User Forums: An Empirical Study”
  • Kudrjavets et al: “Do Small Code Changes Merge Faster? A Multi-Language Empirical Investigation”
  • Kumar and Vidoni: “On the Developers’ Attitude Towards CRAN Checks”
  • Leelaprute et al: “Does Coding in Pythonic Zen Peak Performance? Preliminary Experiments of Nine Pythonic Idioms at Scale”
  • Lüders et al: “Beyond Duplicates: Towards Understanding and Predicting Link Types in Issue Tracking Systems”
  • Nguyen and Nadi: “An Empirical Evaluation of GitHub Copilot’s Code Suggestions”
  • Ragkhitwetsagul and Paixao: “Recommending Code Improvements Based on Stack Overflow Answer Edits”
  • Rao et al: “Comments on Comments: Where Code Review and Documentation Meet”
  • Shome et al: “Data Smells in Public Datasets”
  • Storey et al: “How Developers and Managers Define and Trade Productivity for Quality”
  • Truong et al: “The Unsolvable Problem or the Unheard Answer? A Dataset of 24,669 Open-Source Software Conference Talks”
  • Ünal et al: “Investigating the Impact of Forgetting in Software Development”
  • Veloso and Hora: “Characterizing High-Quality Test Methods: A First Empirical Study”
  • Wattenbach et al: “Do You Have the Energy for This Meeting? An Empirical Study on the Energy Consumption of the Google Meet and Zoom Android apps”
  • Zhang et al: “Code Smells for Machine Learning Applications”

These are in addition to 23 papers that have been waiting for months (in some cases more than a year):

  • Andersen et al: “Adding Interactive Visual Syntax to Textual Code”
  • Bi et al: “Architecture Information Communication in Two OSS Projects: the Why, Who, When, and What”
  • Blackwell et al: “Fifty Years of the Psychology of Programming”
  • Decan and Mens: “Lost in Zero Space: An Empirical Comparison of 0.y.z Releases in Software Package Distributions”
  • Decan and Mens: “What Do Package Dependencies Tell Us About Semantic Versioning?”
  • Dias et al: “What Makes a Great Maintainer of Open Source Projects?”
  • Farzat: “Evolving JavaScript Code to Reduce Load Time”
  • Feal et al: “Angel or Devil? A Privacy Study of Mobile Parental Control Apps”
  • Ferreira et al: “On the (Un-)adoption of JavaScript Front-End Frameworks”
  • Hoda: “Socio-Technical Grounded Theory for Software Engineering”
  • Imam and Dey: “Tracking Hackathon Code Creation and Reuse”
  • Kuttal et al: “Visual Resume: Exploring Developers’ Online Contributions for Hiring”
  • Lamba et al: “Heard It Through the Gitvine: an Empirical Study of Tool Diffusion Across the NPM Ecosystem”
  • May et al: “Gender Differences in Participation and Reward on Stack Overflow”
  • Palomba et al: “Beyond Technical Aspects: How Do Community Smells Influence the Intensity of Code Smells?”
  • Rahman et al: “Why are Some Bugs Non-Reproducible? An Empirical Investigation using Data Fusion”
  • Rahman et al: “Security Smells in Ansible and Chef Scripts”
  • Turk: “SDFunc: Modular Spreadsheet Design with Sheet-defined Functions in Microsoft Excel”
  • Vidoni: “Evaluating Unit Testing Practices in R Packages”
  • Vidoni: “Understanding Roxygen package documentation in R”
  • Yang et al: “Do Developers Really Know How to Use Git Commands? A Large-scale Study Using Stack Overflow”
  • Young et al: “Which Contributions Count? Analysis of Attribution in Open Source”
  • Zerouali et al: “On the Usage of JavaScript, Python, and Ruby Packages in Docker Hub Images”

If you’re interested in summarizing one or more of these for practitioners’ benefit, please reach out: I’m sure readers would appreciate perspectives other than mine.