
Get on the train or be left on the station: using LLMs for software engineering research

dc.contributor.author: Trinkenreich, Bianca
dc.contributor.author: Calefato, Fabio
dc.contributor.author: Hanssen, Geir
dc.contributor.author: Blincoe, Kelly
dc.contributor.author: Kalinowski, Marcos
dc.contributor.author: Pezzè, Mauro
dc.contributor.author: Tell, Paolo
dc.contributor.author: Storey, Margaret-Anne
dc.contributor.author: ACM, publisher
dc.date.accessioned: 2025-09-25T18:38:58Z
dc.date.available: 2025-09-25T18:38:58Z
dc.date.issued: 2025-07-28
dc.description.abstract: The adoption of Large Language Models (LLMs) is not only transforming software engineering (SE) practice but is also poised to fundamentally disrupt how research is conducted in the field. While perspectives on this transformation range from viewing LLMs as mere productivity tools to considering them revolutionary forces, we argue that the SE research community must proactively engage with and shape the integration of LLMs into research practices, emphasizing human agency in this transformation. As LLMs rapidly become integral to SE research—both as tools that support investigations and as subjects of study—a human-centric perspective is essential. Ensuring human oversight and interpretability is necessary for upholding scientific rigor, fostering ethical responsibility, and driving advancements in the field. Drawing from discussions at the 2nd Copenhagen Symposium on Human-Centered AI in SE, this position paper employs McLuhan's Tetrad of Media Laws to analyze the impact of LLMs on SE research. Through this theoretical lens, we examine how LLMs enhance research capabilities through accelerated ideation and automated processes, make some traditional research practices obsolete, retrieve valuable aspects of historical research approaches, and risk reversal effects when taken to extremes. Our analysis reveals opportunities for innovation and potential pitfalls that require careful consideration. We conclude with a call to action for the SE research community to proactively harness the benefits of LLMs while developing frameworks and guidelines to mitigate their risks, to ensure continued rigor and impact of research in an AI-augmented future.
dc.format.medium: born digital
dc.format.medium: articles
dc.identifier.bibliographicCitation: Bianca Trinkenreich, Fabio Calefato, Geir Hanssen, Kelly Blincoe, Marcos Kalinowski, Mauro Pezzè, Paolo Tell, and Margaret-Anne Storey. 2025. Get on the Train or be Left on the Station: Using LLMs for Software Engineering Research. In 33rd ACM International Conference on the Foundations of Software Engineering (FSE Companion '25), June 23-28, 2025, Trondheim, Norway. ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/3696630.3731666
dc.identifier.doi: https://doi.org/10.1145/3696630.3731666
dc.identifier.uri: https://hdl.handle.net/10217/242030
dc.language: English
dc.language.iso: eng
dc.publisher: Colorado State University. Libraries
dc.relation.ispartof: Publications
dc.relation.ispartof: ACM DL Digital Library
dc.rights: © Bianca Trinkenreich, et al. ACM 2025. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in FSE Companion '25, https://dx.doi.org/10.1145/3696630.3731666.
dc.subject: generative AI
dc.subject: LLM
dc.subject: AI4SE
dc.subject: McLuhan's Tetrad
dc.title: Get on the train or be left on the station: using LLMs for software engineering research
dc.type: Text

Files

Original bundle

Name: FACF_ACMOA_3696630.3731666.pdf
Size: 505.75 KB
Format: Adobe Portable Document Format

Collections