Matthew M, GDA, Task 5

A (3)

Behavioural modelling is an important part of NPC development for games. Should emotive modelling be part of that development?

That depends on the game genre in question. In more gameplay-focused genres such as FPS and RTS, most players wouldn't notice an emotive response from an NPC, and most of those NPCs would be "killed" too quickly for any emotive reaction to matter. Whilst an RTS opponent may still benefit from emulating emotions (e.g. launching an all-out assault out of frustration), this is less important than other parts of the AI's programming, so it should be modelled but only implemented if the project has spare time.
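As a rough illustration of the RTS example above, here is a minimal Python sketch. Every class name, value, and threshold is my own assumption for illustration, not taken from any real engine: a "frustration" value rises when attacks fail and, past a threshold, flips the AI into an all-out assault.

```python
class RtsOpponent:
    """Toy RTS opponent whose strategy shifts with a simple 'frustration' value.

    Illustrative sketch only: frustration rises when attacks fail,
    falls when they succeed, and is clamped to the range [0, 1].
    """

    def __init__(self, assault_threshold=0.8):
        self.frustration = 0.0
        self.assault_threshold = assault_threshold

    def record_attack_result(self, succeeded):
        # Failed attacks build frustration; successes relieve it.
        if succeeded:
            self.frustration = max(0.0, self.frustration - 0.3)
        else:
            self.frustration = min(1.0, self.frustration + 0.25)

    def choose_strategy(self):
        # Past the threshold, caution is abandoned: the "all-out
        # assault due to frustration" behaviour described above.
        if self.frustration >= self.assault_threshold:
            return "all-out assault"
        return "probe and expand"
```

A calm opponent probes and expands; after a few failed attacks its frustration crosses the threshold and it commits everything, which is the kind of readable emotive tell a player could actually notice in a slower-paced match.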

However, in genres that are more story-driven, such as RPGs and "narrative games" (a term for "walking simulators" posited by Kill Screen), NPCs with emotional responses draw the player in further and so improve the quality of their time with the game. The player spends far more time with NPCs in these genres (they may still "kill" a few, but are more than likely to be accompanied by at least one companion), so they have time to appreciate the effect an emotive response may have on gameplay. As these genres are also typically slower-paced, the player has a better chance of actually noticing that effect.

In conclusion, it depends on the game's genre and focus, but it would always be a nice touch.

B (1)

A lot of people are scared by the prospect of AI. What do you think would be the ramifications of us achieving true "strong" AI?

I believe that a true "strong" AI is only as good as its coders. I mean this in two ways: what the AI is given the capability to do, and human error.

As was stated in the lecture, what is and isn't AI comes down to intent. This intent also decides what we allow the AI to control. By limiting what a "strong" AI can interact with and what it can do (via code), we rob it of both free will and potentially beneficial learning environments, which is ethically wrong. But by allowing it to interact with anything and do whatever it wants, we create a health and safety hazard for both us and the AI, especially a newly created one that is still learning.

Human error compounds this. When we, as game developers, miss a semicolon or misplace a decimal point, things go slightly out of whack, but we can fix them. What if we made an error in an AI's permissions? Whilst it could be completely harmless, it could also stir up trouble. Microsoft's "Tay" AI was allowed on Twitter for a day, and that (quite famously) went awry. If an AI were accidentally granted access to the internet, that may be a wealth of knowledge, but we have no clue what all that data could do to a "strong" AI. Whilst the issue could probably be sorted by resetting the AI, if the AI is truly "strong", there are moral implications to such an act.
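To make the permissions point concrete, here is a minimal, hypothetical deny-by-default allow-list in Python. Every name here is invented for illustration; the point is that one mistaken entry in the allow-list (the human error above) silently widens what the agent may do.

```python
class AiSandbox:
    """Illustrative sketch of a permission allow-list for an AI agent.

    Deny-by-default: any action not explicitly granted is refused.
    A single typo when building the allow-list grants a capability
    that nothing else in the code will flag.
    """

    def __init__(self, allowed_actions):
        self.allowed_actions = set(allowed_actions)

    def request(self, action):
        # Check the requested action against the explicit grant list.
        if action in self.allowed_actions:
            return "performing: " + action
        return "denied: " + action
```

With `AiSandbox({"read_training_data"})`, a request for `"open_internet_socket"` is refused; a developer accidentally adding that string to the set would hand over internet access without any error ever being raised.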

However, this is not to say that I'm against the existence of "strong" AI in the future. Rather, I believe it should be developed much like a human child is: through supervised learning, where we can make sure it is on the right track and developing a good moral compass (its own, not an encoded one), with programmes in place to get it back on track (much like enrichment courses), and with periodic testing to see whether its learning is progressing nicely. It should also be subject to the social contract and UN rights upon creation, much as we humans are. This would make its "upbringing" very similar to that of humans and therefore (theoretically) increase how similarly the AI acts and thinks compared to us.

References

Kill Screen, "Is it time to stop using the term 'walking simulator'?", killscreen.com/articles/time-stop-using-term-walking-simulator/, accessed 09/11/17

TechCrunch, "Microsoft silences its new A.I. bot Tay, after Twitter users teach it racism [Updated]", techcrunch.com/2016/03/24/microsoft-silences-its-new-a-i-bot-tay-after-twitter-users-teach-it-racism/, accessed 09/11/17
