Do you use Large Language Models (e.g., ChatGPT, Hugging Face models, or in-house LLMs) in your projects? We would love to interview you about your experience!
We are researchers from the Wolfpack Security and Privacy Research Lab at North Carolina State University interested in how developers and practitioners select, integrate, and secure LLMs in software systems. We are also part of the Secure Software Supply Chain Center, a multi-institution research initiative aimed at securing the modern software supply chain.
Interview Participation
We are looking for professionals with experience integrating LLMs into software systems, or who are part of organizations using LLMs. We are interested in your practical experiences, opinions, and challenges related to using LLMs. The interview will take no longer than 60 minutes, and participants will receive $60 in compensation.
Interview details:
- Data usage: Fully anonymous; at most, short anonymized quotes from the interview may be published.
- Estimated time: About 45–60 minutes of your valuable time.
- Scheduling an interview: Flexible scheduling; reach out to us via email if you have any questions!
📅 Book an interview with us or 📧 email anytime with questions — we’d love to hear from you!
- Email Contact: Mahzabin Tamanna [email protected]
About this Study
This study investigates how practitioners decide to adopt (or avoid) LLMs in their projects and what factors, challenges, and safeguards shape these decisions. Our goal is to better understand the role of LLMs in software development workflows and their implications for software supply chain security.
Motivation
LLMs are increasingly embedded into software systems for coding, testing, deployment, and application integration. However, little is known about how practitioners actually make decisions about using them, what challenges or risks they face, and how they try to keep their projects secure. By capturing these perspectives, we aim to uncover how LLMs are being evaluated, how they fit into supply chain security considerations, and where better guidance or support may be needed.
Research Questions
We aim to answer the following research questions:
- What considerations influence practitioners’ decisions to adopt or reject LLMs?
- What challenges arise when integrating LLMs into software systems?
- What safeguards are used to mitigate security risks in LLM integration?
Some sample interview questions:
- What factors did you consider when selecting an LLM for your project?
- Have you ever rejected an LLM due to performance, cost, or security concerns?
- How do you handle risks such as model deprecation or data leakage?
- What safeguards (technical or organizational) are in place for LLM use?
Data Handling
We value and appreciate your contribution and are committed to protecting your privacy and confidentiality:
- Interview recordings will be destroyed after transcription.
- Anonymized transcripts will be destroyed after project completion (likely within a few months).
- We will only use short quotes from the interviews in our publication with your approval, and will make sure that you cannot be identified from our reporting.
- Only the small research team will have access to the data, under an IRB-approved process.
Researchers
- Mahzabin Tamanna: PhD Student (North Carolina State University)
- Elizabeth Lin: PhD Student (North Carolina State University)
- Dominik Wermke: Assistant Professor (North Carolina State University)
- Laurie Williams: Full Professor (North Carolina State University)