Creative Mission

COMPREHENSIVE CREATIVE CREATIVITY

Our "Creative Mission" is to foster a rich, interdisciplinary dialogue that will convey and forge new tools and applications for creative, critical and philosophical thinking; engaging the world in the process. Through workshops, tutorials and social media platforms we also strive to entertain, educate and empower people - from individuals, to businesses, governments or not-for-profit groups; we aim to guide them in building a base of constructive ideas, skills and a Brain Fit paradigm - thereby setting the stage for a sustainable, healthy, and creative approach and lifestyle . These synthesized strategic "Critical Success Factors" - can then give rise to applied long-term life or business - Operating Living Advantages and Benefits.

And, at the same time, we encourage Charlie Munger's key attitude and belief - for and with all of whom we reach - "develop into a lifelong self-learner through voracious reading; cultivate curiosity and strive to become a little wiser (and more grateful)* every day."


* CCC Added - Editor


Thursday 31 January 2019

To Boost Higher-Order #Thinking, Try #Curation



If no one has ever encouraged, pushed, or insisted that you build more higher-order thinking into your students’ learning, it’s possible you’ve been teaching in a cave.
Higher-level thinking has been a core value of educators for decades. We learned about it in college. We hear about it in PD. We’re even evaluated on whether we’re cultivating it in our classrooms: Charlotte Danielson’s Framework for Teaching, a widely used instrument to measure teacher effectiveness, describes a distinguished teacher as one whose “lesson activities require high-level student thinking” (Domain 3, Component 3c).
All that aside, most teachers would say they want their students to be thinking on higher levels, that if our teaching kept students at the lowest level of Bloom’s Taxonomy—simply recalling information—we wouldn’t be doing a very good job as teachers.
And yet, when it’s time to plan the learning experiences that would have our students operating on higher levels, some of us come up short. We may not have a huge arsenal of ready-to-use, high-level tasks to give our students. Instead, we often default to having students identify and define terms, label things, or answer basic recall questions. It’s what we know. And we have so much content to cover, many of us might feel that there really isn’t time for the higher-level stuff anyway.
If this sounds anything like you, I have a suggestion: Try a curation assignment.

WHAT IS CURATION?

When a museum director curates, she collects artifacts, organizes them into groups, sifts out everything but the most interesting or highest-quality items, and shares those collections with the world. When an editor curates poems for an anthology, he does the same thing.
The process can be applied to all kinds of content: A person could curate a collection of articles, images, videos, audio clips, essays, or a mixture of items that all share some common attribute or theme. When we are presented with a list of the “Top 10” anything or the “Best of” something else, what we’re looking at is a curated list. Those playlists we find on Spotify and Pandora? Curation. “Recommended for You” videos on Netflix? Curation. The news? Yep, it’s curated. In an age where information is ubiquitous and impossible to consume all at once, we rely on the curation skills of others to help us process it all.
In an educational setting, curation has a ton of potential as an academic task. Sure, we’re used to assigning research projects, where students have to gather resources, pull out information, and synthesize that information into a cohesive piece of informational or argumentative writing. This kind of work is challenging and important, and it should remain as a core assignment throughout school, but how often do we make the collection of resources itself a stand-alone assignment?
That’s what I’m proposing we do. Curation projects have the potential to put our students to work at three different levels of Bloom’s Taxonomy:
  • Understand, where we exemplify and classify information
  • Analyze, where we distinguish relevant from irrelevant information and organize it in a way that makes sense
  • Evaluate, where we judge the quality of an item based on a set of criteria
If we go beyond Bloom’s and consider the Framework for 21st Century Learning put out by the Partnership for 21st Century Learning, we’ll see that critical thinking is one of the 4C’s listed as an essential skill for students in the modern age (along with communication, creativity, and collaboration) and a well-designed curation project requires a ton of critical thinking.
So what would a curation project look like?

A SAMPLE CURATION TASK

Suppose you’re teaching U.S. history, and you want students to understand that our Constitution is designed to be interpreted by the courts, and that many people interpret it differently. So you create a curation assignment that focuses on the First Amendment.
The task: Students must choose ONE of the rights given to us by the First Amendment. To illustrate the different ways people interpret that right, students must curate a collection of online articles, images, or videos that represent a range of beliefs about how far that right extends. For each example they include, they must summarize the point of view being presented and include a direct quote where the author or speaker’s biases or beliefs can be inferred.
Here is what one submission might look like, created on a platform called eLink (click here to view the whole thing).
Because they are finding examples of a given concept and doing some summarizing, students in this task are working at the Understand level of Bloom’s. But they are also identifying where the author or speaker is showing bias or purpose, which is on the Analyze level.

MORE PROJECT IDEAS

Ranked Collection: Students collect a set of articles, images, videos, or even whole websites based on a set of criteria (the most “literary” song lyrics of the year, or the world’s weirdest animal adaptations) and rank them in some kind of order, justifying their rankings with a written explanation or even a student-created scoring system. Each student could be tasked with creating their own collection or the whole class could be given a pre-selected collection to rank. This would be followed by a discussion where students could compare and justify their rankings with those of other students. (Bloom’s Level: Evaluate)
Shared Trait Collection: This would house items that have one thing in common. This kind of task would work in so many different subject areas. Students could collect articles where our government’s system of checks and balances is illustrated, images of paintings in the impressionist style, videos that play songs whose titles use metaphors. It could even be used as part of a lesson using the concept attainment strategy, where students develop an understanding of a complex idea by studying “yes” and “no” examples of it. By curating their own examples after studying the concept, they will further develop their understanding of it. (Bloom’s Level: Understand)
Literature Review: As the first step of a research project, students could collect relevant resources and provide a brief summary of each one, explaining how it contributes to the current understanding of their topic. As high school students prepare for college, having a basic understanding of what a literature review is and the purpose it serves—even if they are only doing it with articles written outside of academia—will help them take on the real thing with confidence when that time comes. (Bloom’s Levels: Understand for the summarization, Analyze for the sorting and selecting of relevant material)
Video Playlist: YouTube is bursting at the seams with videos, but how much of it is actually good? Have students take chunks of your content and curate the best videos out there to help other students understand those concepts. In the item’s description, have students explain why they chose it and what other students will get out of it. (Bloom’s Levels: Understand for summarization, Evaluate for judging the quality of the videos)
Museum Exhibit: Task students with curating a digital “exhibit” around a given theme. The more complex the theme, the more challenging the task. For example, they might be asked to assume the role of a museum owner who hates bees, and wants to create a museum exhibit that teaches visitors all about the dangers of bees. This kind of work would help students understand that even institutions that might not own up to any particular bias, like museums, news agencies, or tv stations, will still be influenced by their own biases in how they curate their material. (Bloom’s Level: Understand if it’s just a collection of representative elements, Create if they are truly creating a new “whole” with their collection, such as representing a particular point of view with their choices)
Real World Examples: Take any content you’re teaching (geometry principles, grammar errors, science or social studies concepts) and have students find images or articles that illustrate that concept in the real world. (Bloom’s Level: Understand)
Favorites: Have students pull together a personal collection of favorite articles, videos, or other resources for a Genius Hour, advisory, or other more personalized project: A collection of items to cheer you up, stuff to boost your confidence, etc. Although this could easily slide outside the realm of academic work, it would make a nice activity to help students get to know each other at the start of a school year or give them practice with the process of curation before applying it to more content-related topics.

FOR BEST RESULTS, ADD WRITING

Most of the above activities would not be very academically challenging if students merely had to assemble the collection. Adding a thoughtfully designed written component is what will make students do their best thinking in a curation assignment.
The simplest way to do this is to require a written commentary with each item in the collection. Think about those little signs that accompany every item at a museum: Usually when you walk into an exhibit, you find a sign or display that explains the exhibit as a whole, then smaller individual placards that help visitors understand the significance of each piece in the collection. When students put their own collections together, they should do the same thing.
Be specific about what you’d like to see in these short writing pieces, and include those requirements in your rubric. Then go a step further and create a model of your own, so students have a very clear picture of how the final product should look. Because this is a genre they have probably not done any work in before, they will do much better with this kind of scaffolding. Doing the assignment yourself first—a practice I like to call dogfooding—will also help you identify flaws in the assignment that can be tweaked before you hand it over to students.

DIGITAL CURATION TOOLS

It’s certainly possible for students to collect resources through non-digital means, by reading books in the library or curating physical artifacts or objects, but doing a curation project digitally allows for media-rich collections that can be found and assembled in a fraction of the time. And if you have students curating in groups, using digital tools will allow them to collaborate from home without having to meet in person.
Here are a few curation tools that would work beautifully for this kind of project:
  • Elink is the tool featured in the sample project above. Of all the tools suggested here, this one is the simplest. You collect your links, write descriptions, and end up with a single unique web page that you can share with anyone.
  • Pinterest is probably the most popular curation tool out there. If your students are already using Pinterest, or you’re willing to get them started, you could have them create a Pinterest board as a curation assignment.
  • Symbaloo allows users to create “webmixes,” boards of icons that each lead to different URLs. Although it would be possible to create a curated collection with Symbaloo, it doesn’t allow for the same amount of writing that some other tools do, so you would need to have students do their writing on a separate document.
  • Diigo is a good choice for a more text-driven project, like a literature review or a general collection of resources at the beginning stages of a research project, where images aren’t necessarily required. Diigo offers lots of space to take notes about every item in a collection, but it doesn’t have user-friendly supports for images or other media.





Did you enjoy this post? Sign up for our "Picasso Creative Writing Newsletter" to get the TOP monthly posts, articles, reports and studies, like this.

Disclaimer: The facts and opinions expressed within this article are the personal opinions of the author. Picasso Creative Writing does not assume any responsibility or liability for the accuracy, completeness, suitability, or validity of any information in this article.

Wednesday 30 January 2019

6 Critical #Thinking #Skills You Need to Master Now







No matter what walk of life you come from, what industry you’re interested in pursuing or how much experience you’ve already garnered, we’ve all seen firsthand the importance of critical thinking skills. In fact, lacking such skills can truly make or break a person’s career, as the consequences of one’s inability to process and analyze information effectively can be massive.
“The ability to think critically is more important now than it has ever been,” urges Kris Potrafka, founder and CEO of Music Firsthand. “Everything is at risk if we don’t all learn to think more critically.” If people cannot think critically, he explains, they not only lessen their prospects of climbing the ladder in their respective industries, but they also become easily susceptible to things like fraud and manipulation.
With that in mind, you’re likely wondering what you can do to make sure you’re not one of those people. Developing your critical thinking skills is something that takes concentrated work. It can be best to begin by exploring the definition of critical thinking and the skills it includes—once you do, you can then venture toward the crucial question at hand: How can I improve?
This is no easy task, which is why we aimed to help break down the basic elements of critical thinking and offer suggestions on how you can hone your skills and become a better critical thinker.


What is critical thinking?

Even if you want to be a better critical thinker, it’s hard to improve upon something you can’t define. Critical thinking is the analysis of an issue or situation and the facts, data or evidence related to it. Ideally, critical thinking is to be done objectively—meaning without influence from personal feelings, opinions or biases—and it focuses solely on factual information.
Critical thinking is a skill that allows you to make logical and informed decisions to the best of your ability. For example, a child who has not yet developed such skills might believe the Tooth Fairy left money under their pillow based on stories their parents told them. A critical thinker, however, can quickly conclude that the existence of such a thing is highly unlikely—even if there are a few bucks under their pillow.

6 Crucial critical thinking skills (and how you can improve them)

While there’s no universal standard for what skills are included in the critical thinking process, we’ve boiled it down to the following six. Focusing on these can put you on the path to becoming an exceptional critical thinker.

1. Identification

The first step in the critical thinking process is to identify the situation or problem as well as the factors that may influence it. Once you have a clear picture of the situation and the people, groups or factors that may be influenced, you can then begin to dive deeper into an issue and its potential solutions.
How to improve: When facing any new situation, question or scenario, stop to take a mental inventory of the state of affairs and ask the following questions:
  • Who is doing what?
  • What seems to be the reason for this happening?
  • What are the end results, and how could they change? 


2. Research

When comparing arguments about an issue, independent research ability is key. Arguments are meant to be persuasive—that means the facts and figures presented in their favor might be lacking in context or come from questionable sources. The best way to combat this is independent verification; find the source of the information and evaluate.
How to improve: It can be helpful to develop an eye for unsourced claims. Does the person posing the argument offer where they got this information from? If you ask or try to find it yourself and there’s no clear answer, that should be considered a red flag. It’s also important to know that not all sources are equally valid—take the time to learn the difference between popular and scholarly articles.


3. Identifying biases

This skill can be exceedingly difficult, as even the smartest among us can fail to recognize biases. Strong critical thinkers do their best to evaluate information objectively. Think of yourself as a judge in that you want to evaluate the claims of both sides of an argument, but you’ll also need to keep in mind the biases each side may possess.
It is equally important—and arguably more difficult—to learn how to set aside your own personal biases that may cloud your judgement. “Have the courage to debate and argue with your own thoughts and assumptions,” Potrafka encourages. “This is essential for learning to see things from different viewpoints.”
How to improve: “Challenge yourself to identify the evidence that forms your beliefs, and assess whether or not your sources are credible,” offers Ruth Wilson, director of development at Brightmont Academy.
First and foremost, you must be aware that bias exists. When evaluating information or an argument, ask yourself the following:
  • Who does this benefit?
  • Does the source of this information appear to have an agenda?
  • Is the source overlooking, ignoring or leaving out information that doesn’t support its beliefs or claims?
  • Is this source using unnecessary language to sway an audience’s perception of a fact?

4. Inference

The ability to infer and draw conclusions based on the information presented to you is another important skill for mastering critical thinking. Information doesn’t always come with a summary that spells out what it means. You’ll often need to assess the information given and draw conclusions based upon raw data.
The ability to infer allows you to extrapolate and discover potential outcomes when assessing a scenario. It is also important to note that not all inferences will be correct. For example, if you read that someone weighs 260 pounds, you might infer they are overweight or unhealthy. Other data points like height and body composition, however, may alter that conclusion.
How to improve: An inference is an educated guess, and your ability to infer correctly can be polished by making a conscious effort to gather as much information as possible before jumping to conclusions. When faced with a new scenario or situation to evaluate, first try skimming for clues—things like headlines, images and prominently featured statistics—and then make a point to ask yourself what you think is going on.


5. Determining relevance

One of the most challenging parts of thinking critically during a challenging scenario is figuring out what information is the most important for your consideration. In many scenarios, you’ll be presented with information that may seem important, but it may pan out to be only a minor data point to consider.
How to improve: The best way to get better at determining relevance is by establishing a clear direction in what you’re trying to figure out. Are you tasked with finding a solution? Should you be identifying a trend? If you figure out your end goal, you can use this to inform your judgement of what is relevant.
Even with a clear objective, however, it can still be difficult to determine what information is truly relevant. One strategy for combating this is to make a physical list of data points ranked in order of relevance. When you parse it out this way, you’ll likely end up with a list that includes a couple of obviously relevant pieces of information at the top of your list, in addition to some points at the bottom that you can likely disregard. From there, you can narrow your focus on the less clear-cut topics that reside in the middle of your list for further evaluation.
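To make the ranking strategy concrete, here is a minimal sketch in Python; the items and scores below are invented for illustration (they are not from the article), and the point is simply the ranked list with a clear top, middle, and bottom.

```python
# Illustrative only: the data points and scores are made up.
# Each data point gets a rough relevance score from 0 (irrelevant) to 10 (central).
data_points = [
    ("quarterly sales trend", 9),
    ("competitor pricing change", 8),
    ("office coffee budget", 1),
    ("customer churn rate", 7),
    ("new regulation taking effect", 5),
]

# Rank from most to least relevant.
ranked = sorted(data_points, key=lambda item: item[1], reverse=True)

# Partition: the top of the list is clearly relevant, the bottom can likely be
# disregarded, and the middle deserves further evaluation.
clearly_relevant = [name for name, score in ranked if score >= 7]
needs_review = [name for name, score in ranked if 3 <= score < 7]
disregard = [name for name, score in ranked if score < 3]

print("Focus on:", clearly_relevant)
print("Evaluate further:", needs_review)
print("Likely ignore:", disregard)
```

The thresholds are arbitrary; what matters is the habit of forcing every data point into an explicit ranking before deciding what to act on.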

6. Curiosity

It’s incredibly easy to sit back and take everything presented to you at face value, but that can also be a recipe for disaster when faced with a scenario that requires critical thinking. It’s true that we’re all naturally curious—just ask any parent who has faced an onslaught of “Why?” questions from their child. As we get older, it can be easier to get in the habit of keeping that impulse to ask questions at bay. But that’s not a winning approach for critical thinking.
How to improve: While it might seem like a curious mind is just something you’re born with, you can still train yourself to foster that curiosity productively. All it takes is a conscious effort to ask open-ended questions about the things you see in your everyday life, and you can then invest the time to follow up on these questions.
“Being able to ask open-ended questions is an important skill to develop—and bonus points for being able to probe,” Potrafka says.


Become a better critical thinker

Thinking critically is vital for anyone looking to have a successful college career and a fruitful professional life upon graduation. Your ability to objectively analyze and evaluate complex subjects and situations will always be useful. Unlock your potential by practicing and refining the six critical thinking skills above. 
Most professionals credit their time in college as having been crucial in the development of their critical thinking abilities. If you’re looking to improve your skills in a way that can impact your life and career moving forward, higher education is a fantastic venue through which to achieve that. For some of the surefire signs you’re ready to take the next step in your education, visit our article, “6 Signs You’re Ready to Be a College Student.”




USE INDEPENDENT THOUGHT





Did you enjoy this post? Sign up for our "Picasso Creative Writing Newsletter" to get the TOP monthly posts, articles, reports and studies, like this.


Disclaimer: The facts and opinions expressed within this article are the personal opinions of the author. Picasso Creative Writing does not assume any responsibility or liability for the accuracy, completeness, suitability, or validity of any information in this article.

Tuesday 29 January 2019

#Stanford - Artificial Intelligence (#AI)



Artificial intelligence (AI) is the field devoted to building artificial animals (or at least artificial creatures that – in suitable contexts – appear to be animals) and, for many, artificial persons (or at least artificial creatures that – in suitable contexts – appear to be persons).[1] Such goals immediately ensure that AI is a discipline of considerable interest to many philosophers, and this has been confirmed (e.g.) by the energetic attempt, on the part of numerous philosophers, to show that these goals are in fact un/attainable. On the constructive side, many of the core formalisms and techniques used in AI come out of, and are indeed still much used and refined in, philosophy: first-order logic and its extensions; intensional logics suitable for the modeling of doxastic attitudes and deontic reasoning; inductive logic, probability theory, and probabilistic reasoning; practical reasoning and planning, and so on. In light of this, some philosophers conduct AI research and development as philosophy.
In the present entry, the history of AI is briefly recounted, proposed definitions of the field are discussed, and an overview of the field is provided. In addition, both philosophical AI (AI pursued as and out of philosophy) and philosophy of AI are discussed, via examples of both. The entry ends with some de rigueur speculative commentary regarding the future of AI.

1. The History of AI

The field of artificial intelligence (AI) officially started in 1956, launched by a small but now-famous DARPA-sponsored summer conference at Dartmouth College, in Hanover, New Hampshire. (The 50-year celebration of this conference, AI@50, was held in July 2006 at Dartmouth, with five of the original participants making it back.[2] What happened at this historic conference figures in the final section of this entry.) Ten thinkers attended, including John McCarthy (who was working at Dartmouth in 1956), Claude Shannon, Marvin Minsky, Arthur Samuel, Trenchard Moore (apparently the lone note-taker at the original conference), Ray Solomonoff, Oliver Selfridge, Allen Newell, and Herbert Simon. From where we stand now, into the start of the new millennium, the Dartmouth conference is memorable for many reasons, including this pair: one, the term ‘artificial intelligence’ was coined there (and has long been firmly entrenched, despite being disliked by some of the attendees, e.g., Moore); two, Newell and Simon revealed a program – Logic Theorist (LT) – agreed by the attendees (and, indeed, by nearly all those who learned of and about it soon after the conference) to be a remarkable achievement. LT was capable of proving elementary theorems in the propositional calculus.[3][4]


Though the term ‘artificial intelligence’ made its advent at the 1956 conference, certainly the field of AI, operationally defined (defined, i.e., as a field constituted by practitioners who think and act in certain ways), was in operation before 1956. For example, in a famous Mind paper of 1950, Alan Turing argues that the question “Can a machine think?” (and here Turing is talking about standard computing machines: machines capable of computing functions from the natural numbers (or pairs, triples, … thereof) to the natural numbers that a Turing machine or equivalent can handle) should be replaced with the question “Can a machine be linguistically indistinguishable from a human?”


Specifically, he proposes a test, the “Turing Test” (TT) as it’s now known. In the TT, a woman and a computer are sequestered in sealed rooms, and a human judge, in the dark as to which of the two rooms contains which contestant, asks questions by email (actually, by teletype, to use the original term) of the two. If, on the strength of returned answers, the judge can do no better than 50/50 when delivering a verdict as to which room houses which player, we say that the computer in question has passed the TT. Passing in this sense operationalizes linguistic indistinguishability. Later, we shall discuss the role that TT has played, and indeed continues to play, in attempts to define AI. At the moment, though, the point is that in his paper, Turing explicitly lays down the call for building machines that would provide an existence proof of an affirmative answer to his question. The call even includes a suggestion for how such construction should proceed. (He suggests that “child machines” be built, and that these machines could then gradually grow up on their own to learn to communicate in natural language at the level of adult humans. This suggestion has arguably been followed by Rodney Brooks and the philosopher Daniel Dennett (1994) in the Cog Project. In addition, the Spielberg/Kubrick movie A.I. is at least in part a cinematic exploration of Turing’s suggestion.[5]) The TT continues to be at the heart of AI and discussions of its foundations, as confirmed by the appearance of (Moor 2003). In fact, the TT continues to be used to define the field, as in Nilsson’s (1998) position, expressed in his textbook for the field, that AI simply is the field devoted to building an artifact able to negotiate this test. Energy supplied by the dream of engineering a computer that can pass TT, or by controversy surrounding claims that it has already been passed, is if anything stronger than ever, and the reader has only to do an internet search via the string
turing test passed
to find up-to-the-minute attempts at reaching this dream, and attempts (sometimes made by philosophers) to debunk claims that some such attempt has succeeded.
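As a toy illustration of the protocol just described (this is not Turing’s own formulation; the contestants below are deliberately trivial stubs invented for the sketch), here is a minimal Python simulation: a judge questions two hidden players and must guess which room holds the machine, and if over many rounds the judge lands near 50/50, the machine “passes” in the operational sense above.

```python
import random

# Illustrative sketch only: the contestants are trivial stubs, not real dialogue agents.
def human_answer(question: str) -> str:
    return "I'd have to think about that, but probably yes."

def machine_answer(question: str) -> str:
    # A stub whose answers are indistinguishable from the human stub's.
    return "I'd have to think about that, but probably yes."

def judge_guess(answer_a: str, answer_b: str) -> str:
    # With no usable signal in the answers, the judge can only guess at random.
    return random.choice(["A", "B"])

def run_trials(n: int = 10_000) -> float:
    correct = 0
    for _ in range(n):
        question = "Can you describe a childhood memory?"
        # Randomly assign the machine to room A or room B.
        machine_room = random.choice(["A", "B"])
        answer_a = machine_answer(question) if machine_room == "A" else human_answer(question)
        answer_b = machine_answer(question) if machine_room == "B" else human_answer(question)
        if judge_guess(answer_a, answer_b) == machine_room:
            correct += 1
    return correct / n

if __name__ == "__main__":
    accuracy = run_trials()
    # A figure near 0.5 means the judge does no better than 50/50: the machine "passes".
    print(f"Judge identified the machine {accuracy:.1%} of the time")
```

With indistinguishable answers the printed figure hovers around 50%, which is exactly the operational criterion for passing described above; a real contest would of course replace the stubs with a genuine human and a genuine dialogue system.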



Returning to the issue of the historical record, even if one bolsters the claim that AI started at the 1956 conference by adding the proviso that ‘artificial intelligence’ refers to a nuts-and-bolts engineering pursuit (in which case Turing’s philosophical discussion, despite calls for a child machine, wouldn’t exactly count as AI per se), one must confront the fact that Turing, and indeed many predecessors, did attempt to build intelligent artifacts. In Turing’s case, such building was surprisingly well-understood before the advent of programmable computers: Turing wrote a program for playing chess before there were computers to run such programs on, by slavishly following the code himself. He did this well before 1950, and long before Newell (1973) gave thought in print to the possibility of a sustained, serious attempt at building a good chess-playing computer.[6]
From the perspective of philosophy, which views the systematic investigation of mechanical intelligence as meaningful and productive separate from the specific logicist formalisms (e.g., first-order logic) and problems (e.g., the Entscheidungsproblem) that gave birth to computer science, neither the 1956 conference, nor Turing’s Mind paper, come close to marking the start of AI. This is easy enough to see. For example, Descartes proposed TT (not the TT by name, of course) long before Turing was born.



[7] Here’s the relevant passage:
If there were machines which bore a resemblance to our body and imitated our actions as far as it was morally possible to do so, we should always have two very certain tests by which to recognise that, for all that, they were not real men. The first is, that they could never use speech or other signs as we do when placing our thoughts on record for the benefit of others. For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if it is touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do. And the second difference is, that although machines can perform certain things as well as or perhaps better than any of us can do, they infallibly fall short in others, by which means we may discover that they did not act from knowledge, but only for the disposition of their organs. For while reason is a universal instrument which can serve for all contingencies, these organs have need of some special adaptation for every particular action. From this it follows that it is morally impossible that there should be sufficient diversity in any machine to allow it to act in all the events of life in the same way as our reason causes us to act. (Descartes 1637, p. 116)



At the moment, Descartes is certainly carrying the day.[8] Turing predicted that his test would be passed by 2000, but the fireworks across the globe at the start of the new millennium have long since died down, and the most articulate of computers still can’t meaningfully debate a sharp toddler. Moreover, while in certain focussed areas machines out-perform minds (IBM’s famous Deep Blue prevailed in chess over Garry Kasparov, e.g.; and more recently, AI systems have prevailed in other games, e.g. Jeopardy! and Go, about which more will momentarily be said), minds have a (Cartesian) capacity for cultivating their expertise in virtually any sphere. (If it were announced to Deep Blue, or any current successor, that chess was no longer to be the game of choice, but rather a heretofore unplayed variant of chess, the machine would be trounced by human children of average intelligence having no chess expertise.) AI simply hasn’t managed to create general intelligence; it hasn’t even managed to produce an artifact indicating that eventually it will create such a thing.
But what about IBM Watson’s famous nail-biting victory in the Jeopardy! game-show contest?[9] That certainly seems to be a machine triumph over humans on their “home field,” since Jeopardy! delivers a human-level linguistic challenge ranging across many domains. Indeed, among many AI cognoscenti, Watson’s success is considered to be much more impressive than Deep Blue’s, for numerous reasons. One reason is that while chess is generally considered to be well-understood from the formal-computational perspective (after all, it’s well-known that there exists a perfect strategy for playing chess), in open-domain question-answering (QA), as in any significant natural-language processing task, there is no consensus as to what problem, formally speaking, one is trying to solve. Briefly, question-answering (QA) is what the reader would think it is: one asks a question of a machine, and gets an answer, where the answer has to be produced via some “significant” computational process. (See Strzalkowski & Harabagiu (2006) for an overview of what QA, historically, has been as a field.) A bit more precisely, there is no agreement as to what underlying function, formally speaking, question-answering capability computes. This lack of agreement stems quite naturally from the fact that there is of course no consensus as to what natural languages are, formally speaking.[10] Despite this murkiness, and in the face of an almost universal belief that open-domain question-answering would remain unsolved for a decade or more, Watson decisively beat the two top human Jeopardy! champions on the planet. During the contest, Watson had to answer questions that required not only command of simple factoids (Question1), but also of some amount of rudimentary reasoning (in the form of temporal reasoning) and commonsense (Question2):
Question1: The only two consecutive U.S. presidents with the same first name.
Question2: In May 1898, Portugal celebrated the 400th anniversary of this explorer’s arrival in India.
While Watson is demonstrably better than humans in Jeopardy!-style quizzing (a new human Jeopardy! master could arrive on the scene, but as for chess, AI now assumes that a second round of IBM-level investment would vanquish the new human opponent), this approach does not work for the kind of NLP challenge that Descartes described; that is, Watson can’t converse on the fly. After all, some questions don’t hinge on sophisticated information retrieval and machine learning over pre-existing data, but rather on intricate reasoning right on the spot. Such questions may for instance involve anaphora resolution, which require even deeper degrees of commonsensical understanding of time, space, history, folk psychology, and so on. Levesque (2013) has catalogued some alarmingly simple questions which fall in this category. (Marcus, 2013, gives an account of Levesque’s challenges that is accessible to a wider audience.) The other class of question-answering tasks on which Watson fails can be characterized as dynamic question-answering. These are questions for which answers may not be recorded in textual form anywhere at the time of questioning, or for which answers are dependent on factors that change with time. Two questions that fall in this category are given below (Govindarajulu et al. 2013):
Question3: If I have 4 foos and 5 bars, and if foos are not the same as bars, how many foos will I have if I get 3 bazes which just happen to be foos?
Question4: What was IBM’s Sharpe ratio in the last 60 days of trading?

Closely following Watson’s victory, in March 2016, Google DeepMind’s AlphaGo defeated one of Go’s top-ranked players, Lee Sedol, in four out of five matches. This was considered a landmark achievement within AI, as it was widely believed in the AI community that computer victory in Go was at least a few decades away, partly due to the enormous number of valid sequences of moves in Go compared to that in chess.[11] While this is a remarkable achievement, it should be noted that, despite breathless coverage in the popular press,[12] AlphaGo, while indisputably a great Go player, is just that. For example, neither AlphaGo nor Watson can understand the rules of Go written in plain-and-simple English and produce a computer program that can play the game. It’s interesting that there is one endeavor in AI that tackles a narrow version of this very problem: In general game playing, a machine is given a description of a brand new game just before it has to play the game (Genesereth et al. 2005). However, the description in question is expressed in a formal language, and the machine has to manage to play the game from this description. Note that this is still far from understanding even a simple description of a game in English well enough to play it.
But what if we consider the history of AI not from the perspective of philosophy, but rather from the perspective of the field with which, today, it is most closely connected? The reference here is to computer science. From this perspective, does AI run back to well before Turing? Interestingly enough, the results are the same: we find that AI runs deep into the past, and has always had philosophy in its veins. This is true for the simple reason that computer science grew out of logic and probability theory,[13] which in turn grew out of (and is still intertwined with) philosophy. Computer science, today, is shot through and through with logic; the two fields cannot be separated. This phenomenon has become an object of study unto itself (Halpern et al. 2001). The situation is no different when we are talking not about traditional logic, but rather about probabilistic formalisms, also a significant component of modern-day AI: These formalisms also grew out of philosophy, as nicely chronicled, in part, by Glymour (1992). For example, in the one mind of Pascal was born a method of rigorously calculating probabilities, conditional probability (which plays a particularly large role in AI, currently), and such fertile philosophico-probabilistic arguments as Pascal’s wager, according to which it is irrational not to become a Christian.
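Since conditional probability is singled out here as playing a particularly large role in current AI, it may help to write the rule out explicitly, together with Bayes’ theorem that follows from it; the notation below is the standard textbook form, illustrative only, and nothing in it is specific to Pascal’s own presentation.

```latex
% Conditional probability and Bayes' theorem (standard notation, illustrative only)
\[
  P(H \mid E) = \frac{P(H \wedge E)}{P(E)}, \qquad
  P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)} .
\]
```

Read: the probability of a hypothesis H given evidence E is the prior probability of H reweighted by how strongly H predicts E, which is the pattern of belief revision that the probabilistic side of AI, discussed later in connection with Bayes, builds on.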


That modern-day AI has its roots in philosophy, and in fact that these historical roots are temporally deeper than even Descartes’ distant day, can be seen by looking to the clever, revealing cover of the second edition (the third edition is the current one) of the comprehensive textbook Artificial Intelligence: A Modern Approach (known in the AI community as simply AIMA2e for Russell & Norvig, 2002).

Cover of AIMA2e (Russell & Norvig 2002)

What you see there is an eclectic collection of memorabilia that might be on and around the desk of some imaginary AI researcher. For example, if you look carefully, you will specifically see: a picture of Turing, a view of Big Ben through a window (perhaps R&N are aware of the fact that Turing famously held at one point that a physical machine with the power of a universal Turing machine is physically impossible: he quipped that it would have to be the size of Big Ben), a planning algorithm described in Aristotle’s De Motu Animalium, Frege’s fascinating notation for first-order logic, a glimpse of Lewis Carroll’s (1958) pictorial representation of syllogistic reasoning, Ramon Lull’s concept-generating wheel from his 13th-century Ars Magna, and a number of other pregnant items (including, in a clever, recursive, and bordering-on-self-congratulatory touch, a copy of AIMA itself). Though there is insufficient space here to make all the historical connections, we can safely infer from the appearance of these items (and here we of course refer to the ancient ones: Aristotle conceived of planning as information-processing over two-and-a-half millennia back; and in addition, as Glymour (1992) notes, Aristotle can also be credited with devising the first knowledge-bases and ontologies, two types of representation schemes that have long been central to AI) that AI is indeed very, very old. Even those who insist that AI is at least in part an artifact-building enterprise must concede that, in light of these objects, AI is ancient, for it isn’t just theorizing from the perspective that intelligence is at bottom computational that runs back into the remote past of human history: Lull’s wheel, for example, marks an attempt to capture intelligence not only in computation, but in a physical artifact that embodies that computation.[14]
AIMA has now reached its third edition, and those interested in the history of AI, and for that matter the history of philosophy of mind, will not be disappointed by examination of the cover of the third installment (the cover of the second edition is almost exactly like the first edition). (All the elements of the cover, separately listed and annotated, can be found online.) One significant addition to the cover of the third edition is a drawing of Thomas Bayes; his appearance reflects the recent rise in the popularity of probabilistic techniques in AI, which we discuss later.
One final point about the history of AI seems worth making.
It is generally assumed that the birth of modern-day AI in the 1950s came in large part because of and through the advent of the modern high-speed digital computer. This assumption accords with common sense. After all, AI (and, for that matter, to some degree its cousin, cognitive science, particularly computational cognitive modeling, the sub-field of cognitive science devoted to producing computational simulations of human cognition) is aimed at implementing intelligence in a computer, and it stands to reason that such a goal would be inseparably linked with the advent of such devices. However, this is only part of the story: the part that reaches back but to Turing and others (e.g., von Neumann) responsible for the first electronic computers. The other part is that, as already mentioned, AI has a particularly strong tie, historically speaking, to reasoning (logic-based and, in the need to deal with uncertainty, inductive/probabilistic reasoning). In this story, nicely told by Glymour (1992), a search for an answer to the question “What is a proof?” eventually led to an answer based on Frege’s version of first-order logic (FOL): a (finitary) mathematical proof consists in a series of step-by-step inferences from one formula of first-order logic to the next. The obvious extension of this answer (and it isn’t a complete answer, given that lots of classical mathematics, despite conventional wisdom, clearly can’t be expressed in FOL; even the Peano Axioms, to be expressed as a finite set of formulae, require SOL) is to say that not only mathematical thinking, but thinking, period, can be expressed in FOL. (This extension was entertained by many logicians long before the start of information-processing psychology and cognitive science – a fact some cognitive psychologists and cognitive scientists often seem to forget.) Today, logic-based AI is only part of AI, but the point is that this part still lives (with help from logics much more powerful, but much more complicated, than FOL), and it can be traced all the way back to Aristotle’s theory of the syllogism.[15] In the case of uncertain reasoning, the question isn’t “What is a proof?”, but rather questions such as “What is it rational to believe, in light of certain observations and probabilities?” This is a question posed and tackled long before the arrival of digital computers.
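To make the “What is a proof?” picture concrete, here is a small, textbook-style example (not taken from Frege or Glymour; it is just the standard Socrates syllogism) of a finitary proof in which each line is a formula of FOL obtained from earlier lines by a fixed inference rule.

```latex
% A minimal first-order proof, line by line (standard textbook example)
\begin{align*}
1.\quad & \forall x\,(\mathit{Man}(x) \rightarrow \mathit{Mortal}(x)) && \text{premise}\\
2.\quad & \mathit{Man}(\mathit{socrates}) && \text{premise}\\
3.\quad & \mathit{Man}(\mathit{socrates}) \rightarrow \mathit{Mortal}(\mathit{socrates}) && \text{from 1, universal instantiation}\\
4.\quad & \mathit{Mortal}(\mathit{socrates}) && \text{from 2 and 3, modus ponens}
\end{align*}
```

The syllogistic ancestry mentioned above is visible here: the same argument can be run in Aristotle’s syllogistic, which is part of why logic-based AI can be traced back that far.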

2. What Exactly is AI?




MANKIND'S LAST INVENTION








Did you enjoy this post? Sign up for our "Picasso Creative Writing Newsletter" to get the TOP monthly posts, articles, reports and studies, like this.


Disclaimer: The facts and opinions expressed within this article are the personal opinions of the author. Picasso Creative Writing does not assume any responsibility or liability for the accuracy, completeness, suitability, or validity of any information in this article.


Inspirations of passions


Make your interests gradually wider and more impersonal, until bit by bit the walls of the ego recede, and your life becomes increasingly merged in the universal life. An individual human existence should be like a river — small at first, narrowly contained within its banks, and rushing passionately past rocks and over waterfalls. Gradually the river grows wider, the banks recede, the waters flow more quietly, and in the end, without any visible break, they become merged in the sea, and painlessly lose their individual being.


Bertrand Russell

WEBSITE
Visit Us Today!