
Building an AI Agent and a Growth Mindset: My Summer Internship at fivestar*

Kicking Off with a Challenge

I started my summer at fivestar* with a challenge that felt a little ambitious: build an AI agent that could help people do their jobs. The idea was simple: let project managers ask questions about our legacy work and get real, useful answers. The execution was anything but.

 

Using Azure AI Foundry, I stood up an AI agent and gave it access to our project history. The big unlock was connecting SharePoint documents with retrieval‑augmented generation (RAG), so the agent could pull in the right context on demand. Watching it go from a generic model to something that could reference real documents and answer project‑specific questions was a bit surreal. I spent a lot of time in the Foundry’s sandbox, comparing models and tuning prompts until the responses felt grounded and helpful.
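The grounding step can be pictured with a toy sketch. Azure AI Foundry handles retrieval with embeddings and a managed vector index over the SharePoint content; the keyword-overlap scoring below is only a self-contained stand-in for that, and the document names are made up.

```python
# Toy sketch of the retrieval step in RAG: score documents against a
# question and paste the best matches above the prompt, so the model
# answers from real project context instead of guessing.

def score(question: str, doc: str) -> int:
    """Count shared words between question and document (toy relevance)."""
    return len(set(question.lower().split()) & set(doc.lower().split()))

def retrieve(question: str, docs: dict[str, str], top_k: int = 2) -> list[str]:
    """Return the names of the top_k most relevant documents."""
    ranked = sorted(docs, key=lambda name: score(question, docs[name]), reverse=True)
    return ranked[:top_k]

def build_prompt(question: str, docs: dict[str, str]) -> str:
    """Ground the model by prepending retrieved documents to the question."""
    context = "\n\n".join(docs[name] for name in retrieve(question, docs))
    return f"Answer using only this context:\n\n{context}\n\nQuestion: {question}"

# Hypothetical project documents standing in for the SharePoint library.
docs = {
    "acme-sow.txt": "Acme project scope: migrate billing to Azure, go-live in Q3.",
    "acme-retro.txt": "Acme retro: billing migration slipped due to data cleanup.",
    "beta-notes.txt": "Beta project kickoff notes and staffing plan.",
}
prompt = build_prompt("Why did the Acme billing migration slip?", docs)
```

The payoff is the same shape as in the real agent: the answer about the Acme slip is pulled from the retro document, not hallucinated.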

 

The DevOps Experiment

The trickier part was giving the agent access to Azure DevOps. I tried two paths. First, I attempted to host the official ADO MCP server in the cloud, but it currently lacks HTTP support; the silver lining is that AI itself suggested a way to add that support, which is our likely next step. I also experimented with an Azure preview that exposes an existing API as an MCP server, but that fell through due to tier limitations and cost. Not exactly a clean win, but the agent is close to production-ready, and I’m confident it’ll become a real tool for our PMs, surfacing patterns, decisions, and insights from past projects.

 

In parallel, I got to scope my first client project, for the Pitt School of Nursing. With guidance from CJ on our dev team, I broke down a small but practical automation: watch a Windows folder for new files, extract contact data, and use the Qualtrics API to email surveys. I researched the tooling, mapped out what the API could and couldn’t do, wrote the scope, and met with the client to clarify edge cases. It was a great introduction to scoping something real: small enough to grasp but detailed enough to matter.
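The pipeline I scoped boils down to three small steps, sketched below. This is a simplified illustration, not the delivered scope: the real version targets a Windows folder and the Qualtrics API, and `send_survey` here is a labeled placeholder for that API call.

```python
# Sketch of the automation: poll a folder for new files, pull contact
# rows out of each one, and hand them to a survey sender.
import csv
from pathlib import Path

def find_new_files(folder: Path, seen: set[Path]) -> list[Path]:
    """Return files in the watched folder that haven't been processed yet."""
    current = {p for p in folder.iterdir() if p.is_file()}
    new = sorted(current - seen)
    seen.update(new)
    return new

def load_contacts(path: Path) -> list[dict[str, str]]:
    """Read contact rows (assumes a CSV header with 'name' and 'email')."""
    with path.open(newline="") as f:
        return [row for row in csv.DictReader(f) if row.get("email")]

def send_survey(contact: dict[str, str]) -> str:
    """Placeholder for the Qualtrics call that emails a survey invite."""
    # In production this would hit the Qualtrics API with an API token;
    # here we just report what would be sent.
    return f"queued survey invite for {contact['email']}"

def process_folder(folder: Path, seen: set[Path]) -> list[str]:
    """One polling pass: pick up new files, queue invites for each contact."""
    results = []
    for path in find_new_files(folder, seen):
        for contact in load_contacts(path):
            results.append(send_survey(contact))
    return results
```

Tracking a `seen` set is what keeps a repeated polling pass from emailing the same contacts twice, which was one of the edge cases worth clarifying with the client.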

 

From Testing Tools to Zephyr

Midway through the summer, I took a detour into testing. I explored Testomat.io to help us get more value from our automated tests. I wrote basic Playwright scripts in both C# and TypeScript, wired them into an Azure DevOps pipeline, and pushed results to the cloud using Testomat’s reporting library. I introduced the tool to our QA Manager, Swapna, and began folding it into a client workflow until the client switched to Zephyr. In consulting, apparently, that’s part of the game. Fortunately, that pivot opened its own set of problems to solve.

 

Zephyr became a data migration exercise for me. I exported a batch of Jira work items, then wrote a script to reshape their formatting so Zephyr would accept them. After a few rounds of refinement, I had everything loaded: all our existing QA cases, intact and linked to their user stories. No tedious manual recreation needed. It was one of those unglamorous tasks that quietly saves a team hours.
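The core of that reshaping script is just a column remap, sketched below. The Jira and Zephyr field names here are illustrative stand-ins, not the actual export or import schema.

```python
# Simplified reshaping step: read exported Jira rows and rewrite them
# into the column layout the importer expects.
import csv
import io

# Illustrative mapping of Jira export columns onto Zephyr import columns.
FIELD_MAP = {
    "Summary": "Name",
    "Description": "Objective",
    "Linked Issue": "Coverage (Issues)",
}

def reshape(jira_csv: str) -> str:
    """Rewrite a Jira CSV export into the target import layout."""
    reader = csv.DictReader(io.StringIO(jira_csv))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(FIELD_MAP.values()))
    writer.writeheader()
    for row in reader:
        writer.writerow({new: row.get(old, "") for old, new in FIELD_MAP.items()})
    return out.getvalue()

jira_export = (
    "Summary,Description,Linked Issue\n"
    "Login works,Verify login flow,STORY-12\n"
)
reshaped = reshape(jira_export)
```

Keeping the linked-issue column in the map is what preserved the test-case-to-user-story links through the migration.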

 

Joining the QA Effort

As my internship was wrapping up, I joined the QA effort for our National Retail Organization project. I sat in on daily standups, wrote deeper end‑to‑end Playwright tests, and did some hands‑on manual testing as the first pieces of the app came together. I also wired test results back to Zephyr and used AI to enrich the error data we submit, turning “it failed” into “here’s what failed, why it likely failed, and what to check next.” Small improvements like that compound over time, especially for QA and dev handoffs.
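The enrichment idea can be sketched in a few lines. In the real pipeline an AI model produces the analysis before results go to Zephyr; the small rule table below stands in for that call, and the keywords and suggestions are illustrative only.

```python
# Sketch of error enrichment: instead of submitting a bare failure
# message, attach what failed, a likely cause, and a next step.

# Illustrative keyword -> (likely cause, next step) rules, standing in
# for the AI analysis used in the actual pipeline.
HINTS = {
    "timeout": ("the page or API call was slow",
                "check recent deploys and network logs"),
    "selector": ("the element locator no longer matches",
                 "diff the page markup against the test"),
    "401": ("the test user lost its credentials",
            "rotate or re-seed the test account"),
}

def enrich_failure(test_name: str, raw_error: str) -> dict[str, str]:
    """Turn 'it failed' into what failed, why, and what to check next."""
    for keyword, (cause, next_step) in HINTS.items():
        if keyword in raw_error.lower():
            return {"test": test_name, "error": raw_error,
                    "likely_cause": cause, "next_step": next_step}
    return {"test": test_name, "error": raw_error,
            "likely_cause": "unknown", "next_step": "reproduce locally"}
```

Even this crude version shows the value: the record handed to QA carries a starting hypothesis instead of a bare stack trace.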

 

Working Alongside AI

If there was a theme for my summer, it would be working alongside AI rather than just working with tools. Using Cursor felt like having a patient, always‑available pair programmer. The newest models helped me ramp up quickly, fill in gaps, and stay unblocked without interrupting teammates every hour. I still had to read, think, and make decisions, but the feedback loop was so much faster. It changed how I approach problems and made it obvious that being fluent with AI isn’t optional for the kind of work I want to do; it will be a core part of whatever job comes next.

 

Reflections on Growth & Next Steps

I didn’t ship everything I wanted to. Not every approach panned out. But I left with systems running, migrations completed, tests integrated, and an AI agent that’s dangerously close to being a daily tool for project managers. More importantly, I learned how to scope, experiment, pivot, and keep moving. If that’s the future of work, I’m excited to be part of it.

 

Lastly, I'd like to thank everyone for helping me navigate this learning opportunity; I really enjoyed the time spent with each of you. I'll carry what I've learned wherever I land next, and if you ever want to get in touch, don't hesitate to reach out!

October 15, 2025