Hard Light Productions Forums
Modding, Mission Design, and Coding => FS2 Open Coding - The Source Code Project (SCP) => Topic started by: Echelon9 on January 19, 2014, 12:28:38 am
-
FS2Open's AI is one area of the code base which has had many different authors over a number of years. Correspondingly, the AI code is notable for differing coding styles and many instances of copy-paste coding.
The problems stemming from this should be readily apparent.
With the aim of making small steps to resolve this, I have created a branch on GitHub for a rationalisation of the AI code (https://github.com/Echelon9/fs2open.github.com/compare/ai-code-rationalise). For example, there are plenty of locations where a series of variables should be set together. However, one or two locations presently miss one variable -- perhaps because they were copy-pasted at different times, or because certain locations were subsequently edited.
Given the subtleties of bugs introduced into the AI code, I would greatly appreciate testers' assistance.
I'm being assisted by some duplicate-code detection tools.
The steps identified are:
- Rationalise the eight locations where AI goals are reset, into one utility macro or function (Code ready for review)
- Rationalise guarding (Todo)
- ...
-
This is probably a great idea but I'm completely terrified.
-
I'm all for better AI.
How do I test?
Bh
-
This is probably a great idea but I'm completely terrified.
Yes.
Some time ago, I had this high-speed gliding AI completely nailed down. And then suddenly there was this AI change made in the code for Diaspora that made my AI almost completely toothless. It would suddenly spend most of its time doing half-assed maneuvers that it wouldn't fully complete, making it just a sitting duck.
Gotta make sure that every change and fix is tested extensively to keep already released campaigns from suddenly going limp!
-
{SNIP} to avoid {} from suddenly going limp!
Well that would be awkward...
But yeah. I'm all for more robust AI stuff.. but it will definitely need to be seriously tested.. possibly even an entire release cycle dedicated to it (given how little people test special builds, sometimes).
-
Yes, I totally get the concern around testing AI changes. I'm working through the best way of getting testing builds out there, as code review alone is not going to be widely enough tested.