Isshin

ROLE
Combat System Programmer
QA Director
DESCRIPTION
ISSHIN is a Third-Person Combat game focused on embracing death, fighting on, and mastering the way.
Isshin was developed in DigiPen's GAM 300/350 class as part of an interdisciplinary team. I implemented Combat Systems and Tools while also serving as the QA Director.
YEAR
2023-24
GENRE
Melee Combat
LANGUAGE
C++, Blueprints, Batch
TOOLS
Jenkins CI/CD
Visual Studio
UE 5.2
Batch
PLATFORM
Windows


Combat Tools
Cancel Notify
Combat Debugger
This inspector allows the Combat Designer to customize the behavior of each "Cancel Window". For example, if the current combat frame falls within a "Cancel Window" and a new action is requested, the window's map is evaluated to determine the resulting action.
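To illustrate the idea, here is a simplified, engine-agnostic sketch of how such a cancel-window map might be evaluated. All identifiers here are illustrative assumptions, not the project's actual types; the real system lives in Unreal C++ and Blueprints.

```cpp
#include <map>
#include <optional>

// Illustrative action identifiers; the real system uses engine asset references.
enum class Action { LightAttack, HeavyAttack, Dodge, Block };

// One "Cancel Window": a frame range plus a designer-authored map from the
// requested action to the action that actually plays.
struct CancelWindow {
    int startFrame = 0;
    int endFrame   = 0;
    std::map<Action, Action> cancelMap;  // requested -> resulting action

    // Returns the resulting action if the request lands inside the window
    // and the map has an entry for it; otherwise no cancel occurs.
    std::optional<Action> Evaluate(int currentFrame, Action requested) const {
        if (currentFrame < startFrame || currentFrame > endFrame)
            return std::nullopt;
        auto it = cancelMap.find(requested);
        if (it == cancelMap.end())
            return std::nullopt;
        return it->second;
    }
};
```

A request outside the window, or for an action the designer did not map, simply falls through and no cancel happens.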

Cancel Notify instance Inspector
The designer can place many "Cancel Notifies" in the action timeline, making combat actions easy to produce and iterate on.
The Combat Debugger displays key data for both the Custom Input Buffer and the Custom Action Manager. It made its largest impact during the first two months of the project, when animations were first being worked into the game. Because it is an override of Unreal 5's Gameplay Debugger, more data was easy to add as needed.
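As a rough sketch of the kind of data the debugger surfaced, here is a minimal, engine-agnostic input buffer that records recent inputs and can describe itself for a debug overlay. The class and method names are my own illustration, not the project's actual code.

```cpp
#include <deque>
#include <sstream>
#include <string>

// Minimal input buffer: keeps the last N inputs with the frame they arrived on,
// so a debug overlay can print what the player pressed and when.
class InputBuffer {
public:
    explicit InputBuffer(std::size_t capacity) : capacity_(capacity) {}

    void Push(int frame, const std::string& input) {
        if (entries_.size() == capacity_)
            entries_.pop_front();  // drop the oldest input once full
        entries_.push_back({frame, input});
    }

    // One-line summary, suitable for a Gameplay-Debugger-style category.
    std::string Describe() const {
        std::ostringstream out;
        for (const auto& e : entries_)
            out << "[" << e.frame << "] " << e.input << " ";
        return out.str();
    }

private:
    struct Entry { int frame; std::string input; };
    std::deque<Entry> entries_;
    std::size_t capacity_;
};
```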

Gameplay Debugger at the beginning of the project


Gameplay Debugger at the end of the project
Cancel Notifies in the action timeline
Collision Notify
There is a Notify for both "Hit" and "Hurt" boxes. A separate manager kept track of them to verify destruction and prevent duplicates, which allowed multiple "Hit" and "Hurt" boxes to be active at any given time. As for usability, each was used the same way in the montage editor.
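The duplicate-prevention idea can be sketched with a small, engine-agnostic manager that tracks live boxes by ID. Again, the names and the string-ID scheme are illustrative assumptions rather than the project's actual implementation.

```cpp
#include <string>
#include <unordered_map>

enum class BoxType { Hit, Hurt };

// Tracks live collision boxes by ID so a notify firing twice (or a montage
// being interrupted) cannot duplicate or leak a box.
class CollisionBoxManager {
public:
    // Returns false if a box with this ID is already active (duplicate spawn).
    bool Spawn(const std::string& id, BoxType type) {
        return active_.emplace(id, type).second;
    }

    // Returns false if the box was already destroyed (verifies destruction).
    bool Destroy(const std::string& id) {
        return active_.erase(id) == 1;
    }

    std::size_t ActiveCount() const { return active_.size(); }

private:
    std::unordered_map<std::string, BoxType> active_;
};
```

Because each spawn/destroy pair reports success or failure, mismatched notifies show up immediately instead of silently leaving stale boxes in the world.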

Collision Inspection in the Unreal 5 Montage Editor
QA Processes
Intro
The basic QA requirement for the course tied to the project was an Excel sheet, updated every week with new tests, which was widely seen as tedious. My producer and I set out to streamline the process and bring QA closer to the development of the game, so we chose to keep as much development tracking as possible in one place.

Feature Lifecycle
The lifecycle for a feature is straightforward and is a simple adaptation of the standard Agile development cycle.
"Needs Review"
When a feature is "complete", it goes to a short review stage where its title is rewritten as a "testable statement". A designer or engineer then makes sure nothing is obviously wrong, and it gets sent to the "Complete" stage.
"Complete"
At the end of the week, all completed features are checked for a "testable statement" and moved to a QA list, where a "New Feature" tag is appended to their tags. They join the recurring tests and fixed bugs to make up a comprehensive but easy-to-test list. Once a feature is tested for the first time, I either filter it into an archive for regression testing down the road or keep it as recurring, depending on its importance.
Bug Tracking
When a bug is found, it gets reported to a list in ClickUp.
How?
Whoever finds the bug creates a task under the "Triage" stage with a brief title explaining the bug. An automation then fills the task with a form for the reporter to fill out.
The Report
The report asks for some basic info: revision number (or build number if the bug was caught in a compiled build), steps to reproduce the bug, any video or picture evidence, any logs, and a known cause or suggested solution.
Triage
When an engineer has some free time or just wants to switch it up, they go to the bug triage and sort the newly created bugs. A bug ends up in one of four categories: "To Do", "Incomplete Report", "Won't Fix", or "Can't Reproduce". Bugs moved to "To Do" follow the same lifecycle as features. During weekly leads meetings, I would bring up bugs in the other three categories and clean up the list.

Weekly Testing
The intention with weekly testing is to get as much of the team as possible playing the game and to get a status on required functionality. I don't believe a failing test is strictly bad: it means we know something is broken, usually weeks before it becomes a point of stress. The quick rundown of the procedure: I assign groups of test cases to the people available for testing while a build runs on the Build Server. When the build is complete, I instruct everyone to download and open the most recent build. Each case is then worked through and marked as 'Failed', 'Partial', 'Passed', or 'Not Testable'. The results are submitted to the professors and reviewed by the team at the beginning of the next week, and bugs or tasks are written for fixes.
Milestone Testing
Milestone testing is a complete, full-team regression test. The workflow is the same as weekly testing, but with many additional cases, including specific cases for milestone submission and minimum-requirement cases for the expected work in each section of the project rubric.
Development Support
Automated Build Server
Lightmap Render Farm
After we presented our modified QA test sheet to the professors, I was told to check out Jenkins CI/CD. Later that week, I deployed Jenkins on an unused desktop. It started with a basic nightly build that zipped up the output of the Unreal Build Tool. As time allowed, I added C++ compilation verification, Discord messages, automatic Inno Setup installer compilation, and more. While its initial goal was to make it easier for designers to playtest builds, its biggest impact was in debugging and distributing builds to teammates during the weekly QA tests. Not only did it allow us to test more cases, it saved us countless hours of pulling different revisions from source control. Click the image below for a complete write-up!
The server also ran a "Swarm Coordinator" that managed a render farm made up of team members' laptops, which "donated" CPU cores. This cut our Lighting Artist's lightmap build times by 5x, with room for a quality increase. It was non-invasive, since not every CPU core on a team member's laptop is in use at once. Below is an example of ~50 CPU cores hard at work.

~50 cores from 5 PCs at work building light maps for our lighting artist
Link to TOP DevOps WriteUp