I am a Ph.D. candidate in Computer Science at George Mason University (GMU), specializing in applying machine learning, particularly Large Language Models (LLMs), to software engineering workflows. My research, conducted under the guidance of Dr. Kevin Moran in the SAGE Lab, focuses on developing tools that improve developer productivity, UI comprehension, and accessibility. One of my key projects explores in-context program repair using LLMs, automating code fixes with minimal developer intervention. I am also working on task-based generation and fine-tuning techniques with diffusion models to optimize LLMs for specific tasks. In addition, I gained valuable industry experience as a research intern at Microsoft, where I developed LLM-powered solutions that automate and optimize developer workflows, significantly improving productivity.
Microsoft Research Internship
Submitted FRAME to an A* SE Conference
Published MOTOREASE at ICSE 2024
Started work on a systematic literature review
Started my PhD at GMU
Completed my Master's Degree at GMU
Worked as a research and design intern at Alcon
Started my Master's Degree at GMU
Completed my Bachelor's Degree at GMU
Worked as a software engineering intern at ISSI
Fall 2020
Fall 2021
Spring 2025
- Mentored by Dr. Christian Bird, Dr. Nicole Forsgren, and Dr. Rob DeLine, I identified bottlenecks in the software build and deployment process, leveraging machine learning and artificial intelligence to automate and streamline workflows for developers.
- Collaborated with developers to gather requirements, designing and building a user interface that integrates K-means clustering on build failures using Azure OpenAI embeddings. This groups failures for easy access and triage by on-call developers. Built with Python, Node.js, Flask, React, and supported by an Azure Kusto Database.
- Deployed custom Large Language Models (LLMs) within an information-secure Azure environment (OpenAI GPT-4o) to proactively tackle explainability and traceability, significantly reducing manual inspection and fatigue.
- Extracted a set of generalizable design rules to gamify and redesign UI workflows, empowering teams to leverage AI for automating repetitive tasks, enhancing productivity, and minimizing manual efforts.
- Presented this project at an executive review with a Corporate Vice President (CVP), where the partner product team requested an immediate push to production and initiated a successful tech transfer due to its potential to improve developer efficiency. I aim to publish results demonstrating productivity increases for developers.
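The clustering idea behind the triage interface above can be sketched briefly: embed each build-failure message, then group the vectors with K-means so on-call developers see related failures in one bucket. In the real system the vectors come from Azure OpenAI embeddings; here they are simulated with random vectors so the example is self-contained.

```python
# Minimal sketch: group build failures by K-means clustering their
# embeddings. Simulated vectors stand in for Azure OpenAI embeddings.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Pretend each row is the embedding of one build-failure log message;
# the two blobs stand in for two distinct failure modes.
embeddings = np.vstack([
    rng.normal(loc=0.0, scale=0.1, size=(10, 8)),   # e.g. timeout failures
    rng.normal(loc=5.0, scale=0.1, size=(10, 8)),   # e.g. dependency failures
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(embeddings)
labels = kmeans.labels_

# Failures sharing a label land in the same triage bucket.
buckets = {label: np.where(labels == label)[0].tolist() for label in set(labels)}
```

The cluster count is fixed at two for the toy data; a production setup would pick it from the data (e.g. silhouette score) rather than hard-code it.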
- Identified the dearth of developer-facing tools that leverage machine learning techniques to help build accessible Android applications
- Designed and implemented MotorEase, an automated tool written in Java and Python that detects motor-impairment accessibility issues in mobile applications.
- Integrated state-of-the-art techniques in PyTorch computer vision, pattern-matching, and static analysis to detect various accessibility violations through application screenshots and XML data.
- Designed and implemented SearchAccess, a developer-facing search engine that facilitates searching for accessible User Interface screens, built with a Node.js and Flask backend and a React frontend.
- Designed search functionality using CLIP embeddings and Solr search indexing, in conjunction with a MongoDB database and an AWS S3 image storage server, to search over accessible Android UIs.
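The retrieval step behind SearchAccess can be sketched in a few lines: rank candidate screens by cosine similarity between a query embedding and precomputed screen embeddings. The real system uses CLIP embeddings served through a Solr index; here, small hand-made vectors stand in for both, so the example is self-contained.

```python
# Minimal sketch of embedding-based UI retrieval: rank screens by cosine
# similarity to a query vector. In the real system the vectors are CLIP
# embeddings and ranking is delegated to Solr; hand-made 2-D vectors
# stand in for both here.
import numpy as np

def cosine_rank(query_vec, screen_vecs):
    """Return screen indices sorted from most to least similar."""
    q = query_vec / np.linalg.norm(query_vec)
    s = screen_vecs / np.linalg.norm(screen_vecs, axis=1, keepdims=True)
    similarities = s @ q
    return np.argsort(-similarities)

# Hypothetical embeddings for three UI screens and one text query.
screens = np.array([[1.0, 0.0], [0.7, 0.7], [0.0, 1.0]])
query = np.array([0.9, 0.1])

ranking = cosine_rank(query, screens)  # best match first
```

Normalizing both sides first means the dot product is exactly cosine similarity, so screens are ranked by direction rather than vector magnitude.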
- Worked on a multidisciplinary team of researchers and surgeons to prototype a surgical voice assistant
- Designed and implemented wake-word detection for surgical voice assistants using TensorFlow, SageMaker, S3, and current research in voice assistants, after consulting with surgeons and hospitals about requirements.
- Used Python, Librosa, PyAudio, and PyTorch to parse and classify windowed audio to detect the wake-word.
- Achieved 80% accuracy with the wake-word detection prototype on an input stream, which exceeded expectations; the system is now in operating room devices across the US
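The windowing step described above can be sketched as follows: slide a fixed-size window over the audio stream and score each window. The actual prototype classified windows with a trained model (Librosa features and PyTorch); a simple mean-energy score stands in for that classifier here so the sketch runs on its own.

```python
# Minimal sketch of windowed wake-word detection: slide a fixed-size
# window over an audio signal and flag windows whose score crosses a
# threshold. A mean-energy score stands in for the trained classifier
# used in the real prototype.
import numpy as np

def windows(signal, size, hop):
    """Yield (start, window) pairs over the signal with the given hop."""
    for start in range(0, len(signal) - size + 1, hop):
        yield start, signal[start:start + size]

def detect(signal, size=400, hop=160, threshold=0.5):
    """Return start indices of windows whose mean energy exceeds threshold."""
    return [start for start, w in windows(signal, size, hop)
            if np.mean(w ** 2) > threshold]

# Silence with a loud burst in the middle, standing in for the wake word.
audio = np.zeros(4000)
audio[2000:2400] = 1.0

hits = detect(audio)  # window starts that overlap the burst
```

The window size and hop here (400 and 160 samples, typical 25 ms / 10 ms frames at 16 kHz) are illustrative defaults, not the prototype's actual parameters.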
- Developed a project to ease communication between doctors and patients at hospitals by tracking calls, requirements, and patient-to-doctor communication
- Built a series of REST APIs using a Node.js back-end, a React front-end, and a MongoDB database
- Led weekly SCRUM meetings with offshore teams during development and integration into production
Python 98%
Java 96%
Keras/TensorFlow 90%
PyTorch 85%
React 85%
Node 85%
Hadoop 80%
AWS Sagemaker/EC2 92%
Apache Spark 85%
Photoshop 95%
Docker/Kubernetes 75%
C & C++ 85%
If you've made it this far, let's talk and get things rolling!