{"id":1585,"date":"2012-10-15T11:12:17","date_gmt":"2012-10-15T11:12:17","guid":{"rendered":"http:\/\/blog.soton.ac.uk\/comp6044\/?p=1585"},"modified":"2012-10-15T15:58:36","modified_gmt":"2012-10-15T15:58:36","slug":"ai-notes-to-week-1","status":"publish","type":"post","link":"https:\/\/blog.soton.ac.uk\/comp6044\/2012\/10\/15\/ai-notes-to-week-1\/","title":{"rendered":"AI: Notes to week 1"},"content":{"rendered":"<p>Artificial Intelligence from the point of view of philosophy and compsci: Initial Reading\/Findings<br \/>\nRen\u00e9 Descartes &#8211; Discourse on Method and the Meditations<br \/>\nComputer Science: An Overview 11th Edition &#8211; Glenn Brookshear<br \/>\nPhilosophy and Computing: An Introduction &#8211; Luciano Floridi<\/p>\n<p>Started off by reading Brookshear, which was pretty clear and basic. Also looked into the Descartes, which is fairly basic philosophy and might be a little too general, but has some good points about reason and the mind. Philosophy and Computing has a chapter on AI (hard and soft), and is more advanced\/specialised.<\/p>\n<p>Notes on Brookshear &#8211;<br \/>\nSo you get an agent, which needs to respond to environmental stimuli. Some of these responses are easier to program than others, and how much of this actually indicates \u2018intelligence\u2019? Like, a plant grows towards light as a response to stimulus, but that hardly makes it intelligent or aware. That said, human behaviour could also be a collection of stimulus responses that have evolved (respond correctly = survive to reproduce (1); respond incorrectly = die (0)).<\/p>\n<p>The Turing Test has, by now, arguably been passed. What does this indicate?<\/p>\n<p>There are some things which computers find really hard to create an appropriate response to; things which are super easy for humans, for example interpreting visual information and also double meanings in sentences. 
There are various ways to try to get around this, such as semantic webs that construct context in order to generate appropriate \u2018understanding\u2019.<\/p>\n<p>Some people argue that computers will never be properly intelligent in the way that humans are, but others argue that the brain is just lots of different components performing different tasks, which is kinda what a computer is.<\/p>\n<p>Also, Strong AI and Weak AI are different things. Should probably concentrate on just one, as I\u2019ve only got 2,500 words here.<\/p>\n<p>It\u2019s hard to get agents to reason. You can give them a goal, though.<\/p>\n<p>Inference rules allow new statements to be derived from old ones (p. 475).<\/p>\n<p>And then there\u2019s heuristics (rough rules of thumb that let something\/someone work things out for itself\/themself).<\/p>\n<p>\u201cAnother approach to developing better knowledge extraction systems has been to insert various forms of reasoning into the extraction process, resulting in what is called meta-reasoning &#8211; meaning reasoning about reasoning. An example, originally used in the context of database searches, is to apply the closed-world assumption, which is the assumption that a statement is false unless it can be explicitly derived from the information available.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial Intelligence from the point of view of philosophy and compsci: Initial Reading\/Findings Ren\u00e9 Descartes &#8211; Discourse on Method and the Meditations Computer Science: An Overview 11th Edition &#8211; Glenn Brookshear Philosophy and Computing: An Introduction &#8211; Luciano Floridi Started off by reading Brookshear, which was pretty clear and basic. 
Also looked into the Descartes, [&hellip;]<\/p>\n","protected":false},"author":61518,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-1585","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/blog.soton.ac.uk\/comp6044\/wp-json\/wp\/v2\/posts\/1585","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.soton.ac.uk\/comp6044\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.soton.ac.uk\/comp6044\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.soton.ac.uk\/comp6044\/wp-json\/wp\/v2\/users\/61518"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.soton.ac.uk\/comp6044\/wp-json\/wp\/v2\/comments?post=1585"}],"version-history":[{"count":2,"href":"https:\/\/blog.soton.ac.uk\/comp6044\/wp-json\/wp\/v2\/posts\/1585\/revisions"}],"predecessor-version":[{"id":1593,"href":"https:\/\/blog.soton.ac.uk\/comp6044\/wp-json\/wp\/v2\/posts\/1585\/revisions\/1593"}],"wp:attachment":[{"href":"https:\/\/blog.soton.ac.uk\/comp6044\/wp-json\/wp\/v2\/media?parent=1585"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.soton.ac.uk\/comp6044\/wp-json\/wp\/v2\/categories?post=1585"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.soton.ac.uk\/comp6044\/wp-json\/wp\/v2\/tags?post=1585"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}