How I Used Claude to Pass My Capstone Project Review at NSUT

How I used Claude AI to pass my NSUT capstone project review — technical specs, API docs, viva prep, and the exact prompts that worked.

Anshul Goyal · 19 min read
Tags: claude ai, capstone project, nsut, technical documentation, ai tools, cs students, project review, technical writing, viva preparation
[Image: NSUT CS student using Claude AI to write a technical specification and API documentation for a capstone project review]

If you're a CS student with a capstone project review coming up and your documentation isn't in order, here's the actual process I used at NSUT – the one the review board praised – with Claude AI built into every stage.

Claude is a powerhouse for documentation, but it's just one of the 25 best AI tools for CS students worth having in your stack.

Why Capstone Project Review Caught Me Off Guard

The third year at NSUT ends with the capstone project review, which carries more weight than most students realize until about two weeks before it happens. You've been building something all along – my project was a distributed task queue with a Django backend, Redis as the message broker, and a React dashboard – and then, all of a sudden, you have to present it, defend it, and submit professional-looking documentation.

I was not ready for that last part.

It worked. The system behaved as intended: tasks got enqueued, workers processed them, results were saved, and a dashboard displayed live task status updates. But my documentation amounted to a bare-minimum README and some ad-hoc comments written in a sleep-deprived state that I wouldn't wish on any reader. My project report was an unfinished Word document full of "TODO: write explanation of this part." And my deadline was nine days away.

I won't pretend those nine days were calm. I was still attending classes and finishing other assignments, all while carrying the nagging fear that months of work might still not be good enough to explain to a committee of teachers who had examined hundreds of projects like mine.

This is the story of how Claude AI got me through it: what I fed it, what it produced, where it failed, and what I still had to do myself.


What NSUT Capstone Review Actually Evaluates

Before diving into the workflow, it's worth establishing what the review panel actually expects, because "documentation" means different things to different people.

According to NSUT's policy, the capstone review weighs three things equally: the technical merit of the project, your ability to defend design decisions under questioning, and the documentation submitted with the project. That last one is where most students fall down. Not because their projects are technically weak, but because they struggle to explain them properly in text and diagrams.

The documentation package typically includes a project report covering system architecture, a technical specification, API documentation where relevant, a user guide, and presentation slides. Claude helped me assemble all of those from my fragmented documents and half-finished notes.

One more thing worth knowing before you start: the panel's questions go beyond implementation. They mostly want to confirm that you understand your own system. An impressive but poorly thought-out piece of engineering will usually score worse than a simpler, meticulously reasoned project.

AI Model | Context Window | Doc Quality | Technical Logic
Claude 3.5 Sonnet (top pick) | 200K (best for long reports) | ⭐ 5/5 | ⭐ 4.9/5
GPT-4o | 128K (great for short snippets) | ⭐ 4/5 | ⭐ 4.7/5
Gemini 1.5 Pro | 2M (best for massive codebases) | ⭐ 4.5/5 | ⭐ 4.5/5
Perplexity | Live web (best for citations) | ⭐ 3.5/5 | ⭐ 4/5

Phase 1: Using Claude to Write a Technical Specification

My architecture notes were a mess: a Notion page of bullet points, an early diagram I'd drawn in Excalidraw, Discord messages to my partner explaining design decisions in non-technical terms, and scattered code comments justifying particular choices.

So I threw all of it at Claude – the bullet points, a text description of the Excalidraw diagram, and the relevant code comments – along with a carefully written prompt asking for a formal technical specification. The one instruction that mattered most: mark any missing parts of the architecture with [NEEDS DETAIL] rather than inventing technical details I hadn't mentioned.

That instruction is the only reason Claude didn't dream up its own version of my system. Instead, it gave me a clean document with explicit holes in the architecture where I needed to supply the specifics myself.

The first draft got me about 70% of the way there. The Problem Statement was excellent: Claude distilled my disjointed notes into a sharper two-paragraph statement than I could have written myself. The Component Architecture section was accurate for every component I had explained clearly, and it carried exactly three [NEEDS DETAIL] markers for the ones I hadn't, which was precisely the behavior I wanted.

It took me two hours to fill in those gaps, correct the component descriptions to match how the system actually behaved rather than how I had described it in informal Discord chats, and tighten the language wherever it ran verbose.

All told, half a day for something that would otherwise have taken two full days.

The [NEEDS DETAIL] Prompt Trick

When you use Claude for technical documentation, have it highlight gaps rather than fill them. Claude is trained to be helpful, so when you ask it about parts of your system it knows nothing about, it will supply plausible but invented details.
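If you prefer scripting over the chat window, here is a minimal sketch of the same trick using the Anthropic Python SDK. The model id, file names, and section list are placeholders of mine, not the exact setup from this post:

```python
# Sketch: ask Claude for a spec draft that flags gaps with
# [NEEDS DETAIL] instead of inventing them. Assumes the
# `anthropic` SDK is installed and ANTHROPIC_API_KEY is set;
# file names and the model id are illustrative only.
import anthropic

client = anthropic.Anthropic()
notes = open("architecture_notes.md").read()  # your messy notes

prompt = f"""You are a technical writer helping a CS student write
a formal technical specification for a capstone project.

Project notes:
{notes}

Write these sections: Problem Statement, System Overview, Component
Architecture, Data Flow, Technology Stack with reasoning, Known
Limitations, Future Work.

IMPORTANT: do not invent technical details I have not provided.
Mark every gap with [NEEDS DETAIL] instead."""

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # placeholder model id
    max_tokens=4096,
    messages=[{"role": "user", "content": prompt}],
)

spec = message.content[0].text
open("technical_spec_draft.md", "w").write(spec)

# Count how many gaps still need your own input.
print(spec.count("[NEEDS DETAIL]"), "details still needed")
```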


Phase 2: Generating API Documentation With Claude

My task queue exposed six REST endpoints. I had a Postman test collection showing what each endpoint expected and returned, but no written documentation of inputs, outputs, and errors. This is precisely the kind of task Claude excels at: it rewards structure, repetition, and consistency.

To give Claude what it needed, I uploaded a JSON export of the Postman collection along with my Django view files for each endpoint, and asked it to produce Stripe-style API reference pages: a one-line summary, the HTTP method and path, auth requirements, a parameter table, the response schema, and an error table for each endpoint.
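If your endpoint code is spread across files, the context-gathering itself is worth scripting. Here's a minimal sketch that bundles everything into one prompt file to paste into Claude; the paths (postman_collection.json, api/views/) are hypothetical, not the exact layout from my project:

```python
# Sketch: bundle a Postman export and Django view files into a
# single documentation prompt. Paths are placeholders; the format
# requirements mirror the prompt shown later in this post.
import json
from pathlib import Path

collection = json.loads(Path("postman_collection.json").read_text())
views = "\n\n".join(
    f"# {p.name}\n{p.read_text()}"
    for p in sorted(Path("api/views").glob("*.py"))
)

prompt = f"""Document every endpoint below in the style of Stripe's
API reference. For each endpoint give: a one-line summary, the HTTP
method and path, auth requirements, a parameter table (name, type,
required/optional, description), the response schema with field
explanations, and an error table with causes. Output markdown.

Postman collection:
{json.dumps(collection, indent=2)}

Django views:
{views}"""

Path("api_docs_prompt.txt").write_text(prompt)  # paste into Claude
```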

The output was genuinely good. All six endpoints came back documented and consistently formatted. The descriptions were accurate because Claude was working from the actual Django view implementations rather than endpoint names alone, and where a parameter had a default value or was constrained by validation, it picked that up correctly.

I still double-checked every field against my implementation before it went into the final documentation, and I found two errors in the descriptions of particular error codes: Claude had inferred behavior from the code that didn't quite match what my error handling actually did. Everything else was good enough to submit as-is.

Total time to write and review the document: forty-five minutes. Done by hand, it would have eaten most of a day.

For a broader look at how AI tools handle different documentation types, my post on best AI tools for writing technical documentation covers the full stack beyond just Claude.


Phase 3: Writing the Project Report Section by Section

The project report was the hardest part of the entire nine-day sprint. NSUT's format calls for an introduction, literature review, methodology, implementation, results and evaluation, and conclusion, all written to academic standards, which is difficult because it's not how you think about a system after building it for months.

Instead of generating the whole report at once, I had Claude help me draft it section by section, providing the necessary background for each.

Introduction and Problem Statement: I explained the problem my task queue solved: why a distributed task queue was needed in the first place, what the alternatives were, and where my solution fit the case study I had chosen. Claude condensed that into three paragraphs that introduced the project accurately.

Literature Review: This was the single most effective use of Claude in the whole process. I had five or six papers I'd read thoroughly, plus several more I'd only skimmed for references. I pasted their abstracts and key passages into Claude and asked for a synthesized literature review covering the background, related work, and the gap my project filled. It produced a well-written two-page section that I refined by adding my own perspective and cutting one irrelevant reference.

Methodology: I outlined this section myself, covering the broad approach I took to design and implementation, then had Claude expand each bullet point into a paragraph using the right technical terminology. This worked because the organization and substance were mine; Claude was only elaborating.

Implementation Details: This section was almost entirely my own work. It hinged on details only I knew: how my system handled concurrency, why I structured my Redis keys the way I did, and how Redis integrated with Django Channels for the real-time dashboard. Here I used Claude as an editor: I drafted each subsection myself, then pasted it in and asked where a reader would need more explanation.

Results and Conclusion: I gave Claude my benchmark data – throughput at different load levels, latency figures, and failure-recovery times – along with a plain-language explanation of what it all meant. It drafted a results section that tied the findings back to the objectives stated in the introduction, exactly what a formal report should do and exactly what time-pressed students forget to do.

Pros

  • Transforms messy, jotted-down notes into well-structured sections in minutes
  • Maintains a consistent tone throughout a long paper
  • Synthesizes several papers into a coherent literature review section
  • Highlights clarity problems in your writing that you no longer see because you've stared at it too long
  • Ties your results back to your stated aims when you provide both

Cons

  • Will fabricate credible technical specifics unless told not to
  • Implementation-focused passages also demand your expertise and careful scrutiny
  • The academic tone sometimes becomes rigid — you should edit for your own tone
  • Shows no familiarity with your institute’s unique formatting standards or templates
  • Functions most effectively one section at a time — full report generation prompts often lack coherence

Phase 4: Building the Presentation Deck With Claude

The review required a fifteen-minute presentation covering the problem, the architecture, a live demo, key results, and future plans. Thanks to the report, I already had all of that material. Condensing it into language that works on a screen and lands within fifteen minutes, though, is an art of its own.

I gave Claude my project report and asked it to generate a slide outline: a title for each slide, three to five bullets apiece, and a suggested visual (diagram, graph, screenshot, or code snippet) for each. The outline needed some rearranging, since it placed the architecture before the problem statement – the wrong order for an audience that has to care about the problem before it will care about your solution. Otherwise it was calibrated well for a fifteen-minute slot.

From that outline, I built the actual slides in Canva. I deliberately didn't have Claude write the slide text itself: language that reads as formal prose sounds stilted when spoken aloud.


Phase 5: Using Claude to Prepare for the Viva

Submitting the documentation was only half of the evaluation. The other half – the part that scares most students, because there's no script and no backup plan if things go wrong – is the viva, where a faculty panel questions you to test how well you understand your own project.

To prepare, I had Claude role-play the faculty panel and pose the 20 most challenging questions it could come up with, particularly about design decisions and failure scenarios, including the kind you stop noticing once you're the person who built the project.

The list it produced was uncomfortably accurate. Questions like "Why did you choose Redis over RabbitMQ as the message broker?" "How does your system handle tasks that fail mid-execution and get retried, and how many retries are guaranteed?" "What happens to tasks that are in progress when the Redis server crashes partway through?" These were exactly the questions I had quietly hoped nobody would ask.

I wrote out an answer to every single one. Not because I planned to recite them in the viva, but because writing answers in full exposes holes in your logic that mental rehearsal doesn't catch.

Then I pasted all of those answers back into Claude and asked where the flaws were and what follow-up question a sharp assessor would ask next. That round of stress testing surfaced four gaps in my answers that would have been exposed in the actual viva.

  1. Paste your final project report into Claude and ask it to come up with the 20 most probable viva questions a faculty committee could ask about your project, including difficult questions about design compromises and failure cases.

  2. Write full-sentence answers to every question in a separate document. No bullet points: writing in full sentences forces you to notice where your knowledge is thin.

  3. Paste each answer back into Claude and ask, "Where is this explanation weak, and what would a sharp critic ask next?" The reply should expose your blind spots and misconceptions. (Steps 1 and 3 are easy to script; see the sketch after this list.)

  4. For each gap it identifies, go back to your implementation or the underlying concept and master it. Never rely on Claude for a viva answer you don't understand; a faculty committee will spot the cracks immediately.

  5. In the 24 hours before your review, reread your three weakest answers and ask Claude for an alternative way of structuring the same explanation.
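For anyone who would rather not paste twenty answers by hand, steps 1 and 3 above loop easily with the Anthropic Python SDK. A minimal sketch, assuming your written answers live in an answers.md file separated by "---" lines (the file layout and model id are my inventions):

```python
# Sketch: stress-test written viva answers one at a time.
# Assumes the `anthropic` SDK is installed, ANTHROPIC_API_KEY is
# set, and answers.md holds one answer per "---"-separated block.
import anthropic

client = anthropic.Anthropic()
answers = open("answers.md").read().split("---")

for i, answer in enumerate(answers, start=1):
    review = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # placeholder model id
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": (
                "Here is my written answer to a likely capstone "
                "viva question. Where is this explanation weak, "
                "and what follow-up would a sharp examiner ask?\n\n"
                + answer
            ),
        }],
    )
    print(f"--- Answer {i} ---")
    print(review.content[0].text)
```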


The Exact Claude Prompts I Used (Copy These)

Every guide tells you to "use AI for documentation," but almost none show you what to actually type. Here are the three prompts that worked best when I prepared for my capstone project review; adapt them to your own project.

For the technical specification:

"You are a technical writer who needs to help a computer science student write a technical specification for his capstone project. Here is my document on my project architecture and design choices: [paste your document]. Please write a formal technical specification with the following sections: Problem statement, System overview, Component architecture, Data flow, Technology stack with reasons for using particular technologies, Known limitations, and Possible future work. Avoid inventing technical details that I haven't provided – use [NEEDS DETAIL] instead.

For API documentation:

"Document each of the above endpoints according to Stripe API Documentation. Include the following: a one-liner on what the endpoint does; HTTP methods/path; how to access; all parameters of the request including name, type, whether it’s required or optional, and an explanation; complete schema/response with explanations of fields; table of errors with causes. All in markdown format. Here is my code: [paste your view code and Postman collection]."

For viva preparation:

"Assume you are a panel of lecturers at an engineering school examining a third year CS capstone project. From the project document provided below, come up with twenty questions that would challenge the knowledge of the student in this project, especially with regard to the design decisions, trade-offs, and failures. Some of the questions should not be easily answered by a student who developed the project without understanding the principles involved. Here is the project document: [paste project] ."

Save These Prompts

The more focused your prompt is, the less post-editing your output will require. Fuzzy prompts result in generic documents. These three prompts are focused since they establish the document structure, target audience, required sections, and exceptional cases – even before Claude starts writing.
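One habit that pays off: store the prompts as templates with explicit placeholders so you never retype them under deadline pressure. A trivial sketch (the placeholder name is mine; the wording follows the spec prompt above):

```python
# Sketch: keep reusable prompt templates with named placeholders.
from string import Template

SPEC_PROMPT = Template(
    "You are a technical writer helping a CS student write a "
    "technical specification for their capstone project. Here are "
    "my architecture notes: $notes. Write these sections: Problem "
    "Statement, System Overview, Component Architecture, Data Flow, "
    "Technology Stack with reasoning, Known Limitations, Future "
    "Work. Do not invent details I have not provided; mark every "
    "gap with [NEEDS DETAIL]."
)

print(SPEC_PROMPT.substitute(notes=open("notes.md").read()))
```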


Honest Limitations — What Claude Could Not Do

This post would not be worth reading if I did not address the limits honestly.

Claude could not understand my codebase until I explained the logic behind it; the quality of its output tracked the clarity of my input. When I asked vague questions in generic terms, I got generic answers that could have applied to almost any distributed application. When I supplied real code and asked specific questions, I got explanations worth showing to anyone.

It also couldn't guess the reasoning behind my architectural decisions. In several places the report had to explain why I chose what I chose: why workers pulled tasks rather than having tasks pushed to them, why priority queues didn't make the first release, why results lived in Redis instead of being written back to PostgreSQL. Each decision had a real rationale, and Claude would have invented a different, plausible-sounding one for each that would have collapsed under the panel's follow-up questions.

The lived details of the implementation were another matter entirely. Claude had no idea how my worker pool behaved in practice, or why my Django Channels configuration showed that strange behavior in its consumer routing. Only I knew those things, and only I could document them in the relevant sections.

Lastly, Claude doesn't know NSUT's report format, your advisor's preferred presentation style, or the formatting guidelines your department has refined over years of reviewing student reports. Every section Claude produces has to be checked and reformatted against your department's template.

Your proper role throughout this process should be that of the editor, not the author. Claude is responsible for the difficult task of going from zero to something, while you take care of ensuring that it accurately reflects the truth.


The Result: What Actually Happened at the Review

The review itself went smoothly. The documentation package drew strong feedback from the panel, with one lecturer noting that the API documentation was unusually thorough for a third-year project – a direct result of the Claude-assisted workflow.

Some viva questions were still hard, like the one about configuring Redis persistence under load, which had already surfaced during my Claude stress-testing. It was tricky because it hinged on the trade-off between Redis Database Backup (RDB) snapshots and Append-Only File (AOF) logging – periodic dumps versus logging every write – persistence tuning I had deliberately decided not to implement.

None of that understanding came from Claude. What the tool did was let me turn months of accumulated knowledge into the required documents on time, in my own words, polished into the professional register the review demanded. Claude doesn't create understanding; it removes the friction between understanding and documenting. For students who build something over months and then have two weeks to document it, that friction is what turns excellent work into poorly documented work.

For CS students at NSUT, or anyone with a major project review ahead: start documenting early. Let Claude translate what you know into what reviewers will actually see, and let it find the holes in your knowledge – the questions you can't yet answer – so you can fill them before the reviewers find them for you.

The difference between a project that works and the same project being recognized by reviewers as working comes down to communication. And in 2026, Claude AI is very good at exactly that.


Frequently Asked Questions

Can I use Claude AI for my college capstone project?
Yes — Claude is excellent for drafting technical documentation, project reports, system design specs, and presentation content. Most universities allow AI as a productivity aid for documentation. Always check your institution's policy and never submit AI-generated code or analysis as your own original academic work.
How does Claude help with technical project documentation?
Claude can take your rough notes, architecture diagrams, and code snippets and produce structured technical specifications, API references, and project reports. Its large context window means you can paste entire modules or design notes without losing coherence in the output.
What should I feed Claude to get good project documentation?
The more context, the better. Paste your system architecture overview, the problem statement, key design decisions, your tech stack, and rough bullet notes on what each component does. Claude structures all of this into readable, formal documentation far faster than writing from scratch.
Can Claude help me prepare for a project viva or review panel?
Absolutely. Give Claude your project report and ask it to generate the 15 most likely questions a faculty panel would ask, along with model answers. Then use it to stress-test your explanations — paste your answer to a question and ask Claude where your reasoning is weak.
Is using Claude for project documentation considered academic dishonesty?
Using Claude to draft and structure documentation that you review, edit, and own is generally considered a productivity tool, similar to Grammarly or a writing guide. Using it to generate core technical analysis or code you submit as your own work is a different matter. Always refer to your college's academic integrity policy.
