
Frost & Sullivan Applauds IBM Watson for Ushering in the Next Generation of Cognitive Systems

Delivers Cloud-based Application into the Hands of Today's Connected Consumers


MOUNTAIN VIEW, Calif., Sept. 30, 2013 /PRNewswire/ -- IBM Watson Solutions has been recognized with the 2013 North America Frost & Sullivan Award for New Product Innovation for its new Engagement Advisor tool. Each year, Frost & Sullivan presents this award to the company that has developed an innovative product that leverages leading-edge technologies and produces value-added features and benefits for customers. Click here for the full multimedia experience of this release: http://bit.ly/15qCxsf

IBM's high-profile Watson technology is being applied to the realm of customer engagement through the first broadly available commercialized offering. Delivered through cloud services, the new solution, IBM Watson Engagement Advisor, is a technology breakthrough that allows brands to crunch big data in record time to transform the way they engage clients in key functions such as customer service, marketing and sales. The application will empower a brand's customer service agents to provide fast, data-driven answers, or sit directly in the hands of consumers via mobile device. In one simple click, the solution's "Ask Watson" feature will quickly help address customers' questions, offer feedback to guide their purchase decisions, and troubleshoot their problems. In essence, the IBM Watson Engagement Advisor answers questions and guides users through processes with plain-English dialogue, while also building upon past conversations across channels throughout the lifetime of a relationship.

The IBM Watson Engagement Advisor processes vast amounts of data, puts content in context for greater insights, weighs the results with confidence-rated responses, and adapts in much the same way that humans do. Unlike traditional systems that simply calculate rapidly, cognitive systems learn from their interactions with data and humans, and continuously reprogram themselves to offer better, more personalized results.

"Watson's innovative technology continues to thrive and grow, from helping doctors improve patient care, to helping businesses put consumers first, in an increasingly connected world," said Manoj Saxena, General Manager, IBM Watson Solutions. "It is a tremendous honor to be recognized by Frost & Sullivan for this achievement, for it shows how far Watson has come in inspiring business leaders across all industries to think differently about how technology can be used to transform daily lives."

"The IBM Watson Engagement Advisor technology can listen to and respond to a series of follow-up questions and remember the previous questions that were posed," said Frost & Sullivan analyst Stephen Loynd. "In other words, IBM Watson combines technologies that allow the Engagement Advisor to understand natural language and human communication, generate and evaluate evidence-based hypotheses, as well as adapt and learn from user selections."

IBM Watson Engagement Advisor's Early Customer Program is the first step toward what IBM hopes will be the transformation of the way individuals and companies engage with each other in the smartest possible way. Consider that the solution offers natural language processing to help understand the complexities of human speech and writing, both as a means of user interaction and as a tool to unlock the potential of unstructured data (text, blogs, tweets, and email). "Watson also empowers customers to take action in an intuitive, fast, accurate and consistent manner at the point of action," noted Loynd.
Frost & Sullivan Best Practices awards recognize companies in a variety of regional and global markets for demonstrating outstanding achievement and superior performance in areas such as leadership, technological innovation, customer service, and strategic product development. Industry analysts compare market participants and measure performance through in-depth interviews, analysis, and extensive secondary research to identify best practices in the industry.

About IBM Watson Solutions

IBM Watson, named after IBM founder Thomas J. Watson, gained fame by beating human contestants on the television quiz show Jeopardy! in February 2011. It was also a very public trial of an advanced form of computing: a cognitive system. A system that is not simply programmed but is trained to learn based on interactions and outcomes. A system that rivals a human's ability to answer questions posed in natural language with speed, accuracy and confidence, and that also brings man and machine together. By accurately extracting facts and quickly understanding relationships in large volumes of data, Watson represents a significant shift in system architecture and in organizations' ability to quickly analyze, understand and respond to Big Data. Watson's ability to answer complex questions posed in natural language with speed, accuracy and confidence has enormous potential to improve decision making across a variety of industries. Two years after its historic victory on Jeopardy!, IBM has put Watson to work in industries including customer engagement, healthcare and financial services. For more information on IBM Watson, please visit www.ibmwatson.com. To join the social discussion about Watson, include the hashtag #ibmwatson. Follow Watson on Facebook: www.facebook.com/ibmwatson

About Frost & Sullivan

Frost & Sullivan, the Growth Partnership Company, works in collaboration with clients to leverage visionary innovation that addresses the global challenges and related growth opportunities that will make or break today's market participants. Our "Growth Partnership" supports clients by addressing these opportunities and incorporating two key elements driving visionary innovation: The Integrated Value Proposition and The Partnership Infrastructure.

The Integrated Value Proposition provides support to our clients throughout all phases of their journey to visionary innovation including: research, analysis, strategy, vision, innovation and implementation.

The Partnership Infrastructure is entirely unique as it constructs the foundation upon which visionary innovation becomes possible. This includes our 360-degree research, comprehensive industry coverage, career best practices, as well as our global footprint of more than 40 offices. For more than 50 years, we have been developing growth strategies for the Global 1000, emerging businesses, the public sector and the investment community. Is your organization prepared for the next profound wave of industry convergence, disruptive technologies, increasing competitive intensity, Mega Trends, breakthrough best practices, changing customer dynamics and emerging economies?

IBM Watson: How the Jeopardy-winning supercomputer was born, and what it wants to do next

Between them, they'd racked up over $5 million in winnings on the television quiz show Jeopardy. They were the best players the show had produced over its decades-long lifetime: Ken Jennings had the longest unbeaten run at 74 winning appearances, while Brad Rutter had earned the biggest prize pot with a total of $3.25 million. Rutter and Jennings were Jeopardy-winning machines. And in early 2011, they agreed to an exhibition match against an opponent who'd never even stood behind a Jeopardy podium before.

But this Jeopardy unknown had spent years preparing to take on the two giants in the $1 million match, playing 100 games against past winners in an effort to improve his chances of winning. That opponent didn't smile, offered all his answers in the same emotionless tone, and wouldn't sit in the same room as his fellow contestants. He had to work so hard at keeping his cool, and was so noisy doing it, that he was thought too disruptive to take the podium in person. He was kept in a back room, his answers piped into the studio. You wouldn't know by looking at him what he was thinking; maybe you'd spot just a tinge of colour when he was puzzling over a particularly hard question.

The contender started out with a run of winning answers: he knew his Beatles songs, Olympic history, literary criminals. Sure, he wasn't too familiar with his Harry Potter, but he stretched out a lead nonetheless, leaving Rutter and Jennings trailing thousands of dollars behind. But questions on decades tripped him up, and Rutter fought back, piling up enough cash to unsettle anyone who'd bet on the outcome of the match. By the end of the first of the special exhibition match shows, you'd have been hard pushed to work out which contestant was safest with your money.

But then Double Jeopardy happened. The upstart powered through the big questions, winning even with guesses he was far from convinced about, and placing odd bets that came good. By the end of the second episode, the unknown had $25,000 more than his closest opponent, Rutter. Rutter and Jennings looked increasingly uncomfortable as it began to look like they'd get a pasting from the new boy, bobbing in frustration as their opponent buzzed in before them time and time again.

"I, for one, welcome our new computer overlords"Ken Jennings' response to losing to an exhibition
Jeopardy match to Watson
Jennings managed a late fightback in the third episode, but the new opponent gradually clawed back enough money to make it a close run. All three correctly answered the last question - 'William Wilkinson's 'An account of the principalities of Wallachia and Moldavia' inspired this author's most famous novel' - with 'Who is Bram Stoker?', but Jennings appended his response with: "I for one welcome our new computer overlords". He, and Rutter, had lost to Watson, a room-sized beast of a machine made by IBM and named after the company's founder, Thomas J Watson.

Watson, consisting of ten racks of IBM Power 750 servers, had to be kept apart from the human contestants because of the roar of its cooling system, and was represented at the podium by an avatar of IBM's Smarter Planet logo, whose moving lines would go green when Watson had cracked a thorny problem, and orange when the answer was wrong. While Watson had the questions delivered in text rather than by listening to the quizmaster, he played the game like his human counterparts: puzzle over the question, buzz in, give the answer that's most likely to be right, tot up some prize money. And Watson was right a lot of the time. He won the game with $77,147, leaving Rutter and Jennings in the dust with $21,600 and $24,000 respectively. It turned out that the real Jeopardy-winning machine was, well, a machine.

Three nights, two people, one machine and $1 million: the victory of IBM's Watson over two human contestants on Jeopardy was the first, and possibly only, time the machine impressed itself on the general public's consciousness.

IBM Watson defeated two of Jeopardy's greatest champions. Image: IBM


But even before Watson secured its now-famous win, IBM was working on how to turn the cute quiz show-dominating machine into a serious business contender.

Watson began life five years before its TV appearance, when IBM Research execs were searching for the next Grand Challenge for the company. IBM periodically runs these Grand Challenges: selected projects that pit man against machine, have international appeal, are easy to grasp, and attract people into working in science and maths fields. Along with Watson, the Grand Challenges have spawned Deep Blue, the machine that famously beat grandmaster Garry Kasparov at chess, and the Blue Gene supercomputer.

In the mid-2000s, IBM was on the lookout for its next Grand Challenge. Paul Horn, then director of IBM Research, was in favour of trying to develop a machine that could win the Turing Test, a way to measure machine intelligence by having a system attempt to fool a human into thinking that they're having a conversation with another person. As challenging as passing the Turing Test is (no machine has yet done it), it was felt that it wouldn't perhaps light up the public's imagination as other projects had. But were there any related challenges that could still bring those elements of competing against humans and understanding human speech together?

"Beating a human in Jeopardy is a step in that direction - the questions are complicated and nuanced, and it takes a unique type of computer to have a chance of beating a human by answering those type of questions. I was running the research division and I was bugging people in the organisation, in particular [former EVP in IBM's software group] Charles Lickel," Horn said.

Lickel was inspired to take on the challenge of building a Jeopardy-winning computer after having dinner with his team. "We were at a steak house in Fishkill, New York. In the middle of dinner, all of a sudden the entire restaurant cleared out to the bar. I turned to my team and asked 'what's going on?'. It was very odd. I hadn't really been following Jeopardy, but it turned out it was when Ken Jennings was having his long winning streak, and everyone wanted to find out if he would win again that night, and they'd gone to the bar to see," Lickel said. Jennings won once again that night, and still holds the longest unbeaten run on Jeopardy with 74 appearances undefeated.

"They initially said no, it's a silly project to work on, it's too gimmicky, it's not a real computer science test, and we probably can't do it anyway"IBM researchers' first take on building a machine
that could win Jeopardy
The idea of a quiz champion machine didn't immediately win his team over, with many of Lickel's best staff saying they didn't believe a machine could compete with, let alone beat, flesh-and-blood champions. "They initially said no, it's a silly project to work on, it's too gimmicky, it's not a real computer science test, and we probably can't do it anyway," said Horn.

Nonetheless, a team sufficiently adventuresome to take on the challenge of building a Jeopardy winner was found. It was still a small project, and thoughts of commercialisation weren't uppermost in anyone's mind: Grand Challenges were demonstration projects, whose return for the company was more in the buzz they created than in a contribution to the bottom line. If commercialisation happened, great - but for now, Watson was just a bit of a moonshot for IBM. Due to the initial size of the effort, it was funded from the research group's everyday budget and didn't require sign-off from Big Blue's higher-ups, meaning it could operate free of the commercial pressures of most projects.

Jeopardy's quirk is that instead of the quizmaster setting questions and contestants providing the answer, the quizmaster provides the answers, known as 'clues' in Jeopardy-speak, to which contestants provide a question. Not only would the machine need to be able to produce questions for the possible clues that might come its way on Jeopardy, it would need to be able to first pull apart Jeopardy's tricksy clues - work out what was being asked - before it could even provide the right response.

Jeopardy host Alex Trebek and the IBM team talk about the exhibition match with Watson. Image: IBM
For that, IBM developed DeepQA, a massively parallel software architecture that examined natural language content both in the clues set by Jeopardy and in Watson's own stored data, along with looking into the structured information it held. The component-based system, built on a series of pluggable components for searching and weighting information, took about 20 researchers three years to reach a level where it could tackle a quiz show performance and come out looking better than its human opponents.

First up, DeepQA works out what the question is asking, then works out some possible answers based on the information it has to hand, creating a thread for each. Every thread uses hundreds of algorithms to study the evidence, looking at factors including what the information says, what type of information it is, its reliability, and how likely it is to be relevant, then creates an individual weighting based on what Watson has previously learned about how likely such answers are to be right. It then generates a ranked list of answers, with evidence for each of its options.

The information that DeepQA could eventually query for Jeopardy amounted to 200 million pages of information from a variety of sources. All the information had to be stored locally (Watson wasn't allowed to connect to the internet during the quiz) and understood, queried and processed at a fair clip: in Jeopardy's case, Watson had to spit out an answer in a matter of seconds to make sure it was first to the buzzer.

"When I left IBM at the end of 2007, Watson was an embryonic project," said Horn. "It had three people in Charles Lickel's area that got the data from the old Jeopardy programmes and were starting to train the machine. It could barely beat a five-year-old at that time. The projection was 'god knows how long it would take to beat an adult, let alone a grand champion'. Then over time, when it looked like they started to have a chance, Dave [Ferrucci], under the leadership of John Kelly, grew the project into something substantial."

While there's still debate over exactly when the idea of making Watson pay its way finally took shape at IBM, when Watson took to the stage for its Jeopardy-winning performance, the show featured IBM execs talking about possible uses for the system in healthcare, and moves to establish a Watson business unit began not long after the Jeopardy show aired.
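The pipeline described above lends itself to a compact sketch. The following toy Python code is a minimal illustration, not IBM's implementation: the two scorers, the hand-set weights and the tiny corpus are invented stand-ins for DeepQA's hundreds of evidence-scoring algorithms and its learned models.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical evidence scorers; the real DeepQA ran hundreds of algorithms.
def keyword_overlap(clue, candidate, corpus):
    text = corpus.get(candidate, {}).get("text", "")
    hits = sum(1 for word in clue.lower().split() if word in text)
    return hits / max(len(clue.split()), 1)

def source_reliability(clue, candidate, corpus):
    # Stand-in for "how trustworthy is the evidence behind this candidate?"
    return corpus.get(candidate, {}).get("reliability", 0.0)

SCORERS = [keyword_overlap, source_reliability]
WEIGHTS = [0.7, 0.3]  # in DeepQA these weightings were learned, not hand-set

def answer(clue, candidates, corpus):
    def evaluate(candidate):
        # One evidence-gathering "thread" per candidate answer.
        scores = [scorer(clue, candidate, corpus) for scorer in SCORERS]
        confidence = sum(w * s for w, s in zip(WEIGHTS, scores))
        return candidate, confidence
    # Fan the candidates out in parallel, then return a ranked,
    # confidence-rated list of answers.
    with ThreadPoolExecutor() as pool:
        return sorted(pool.map(evaluate, candidates), key=lambda r: -r[1])

corpus = {
    "Bram Stoker": {"text": "dracula novel author wallachia famous", "reliability": 0.9},
    "Jules Verne": {"text": "novel author france voyages", "reliability": 0.8},
}
print(answer("this author's most famous novel", ["Bram Stoker", "Jules Verne"], corpus))
```

The essential shape matches the description: every candidate gets its own evidence thread, each scorer contributes a feature, and the weighted merge produces the confidence that decides whether Watson buzzes.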

IBM's then-CEO Sam Palmisano and its current CEO Ginni Rometty, under whose remit Watson fell at the time, began discussions in the weeks after the win, and the project was moved from under the wing of IBM Research and into the IBM Software group. In August of 2011, the Watson business unit proper came into being, headed up by Manoj Saxena, who'd joined IBM some years earlier when the company he worked for, Webify, was acquired by IBM. Saxena was the unit's employee number one. Within three months, he had been joined by 107 new Watson staffers, mostly technologists in the fields of natural language processing and machine learning.

Healthcare had already been suggested as the first industry Watson should target for commercial offerings, but there were no plans to confine it just to medicine. Any information-intensive industry was fair game: anywhere there were huge volumes of unstructured and semi-structured data that Watson could ingest, understand and process quicker than its human counterparts. Healthcare might be a starting point, but banking, insurance, and telecoms were all in the firing line.

But how do you turn a quiz show winner into something more business-like? The first job for the Watson team was to get to grips with the machine they'd inherited from IBM Research, understand the 41 separate subsystems that went into Watson, and work out what needed to be fixed up before Watson could put on its suit and tie. In the Watson unit's first year, the system got sped up and slimmed down. "We serialised the threads and how the software worked and drove up the performance," Saxena said. "The system today compared to the Jeopardy system is approximately 240 percent faster and it is one-sixteenth the size. The system that was the size of a master bedroom will now run in a system the size of the vegetable drawer in your double-drawer refrigerator." Another way of looking at it: a single Power 750 server, measuring nine inches high, 18 inches wide and 36 inches deep, and weighing in at around 100 pounds.

Having got the system to a more manageable size for businesses, IBM set about finding customers to take it on. The company had healthcare pegged as its first vertical for Watson from the time of the Jeopardy win. However, while Jeopardy Watson and healthcare Watson share a common heritage, they're distinct entities: IBM forked the Watson code for its commercial incarnation.

"The system that was the size of a master bedroom will now run in a system the size of the vegetable drawer in your double-drawer refrigerator"Watson VP Manoj Saxena on the shrinking
Watson
Jeopardy Watson had one task: get an answer, understand it, and find the question that went with it. It was a single-user system; had three quizmasters put three answers to it at once, the machine would have been thrown into a spin. Watson had to be retooled for a scenario where tens, hundreds, however many clinicians would be asking questions at once - and not single questions either, but complex conversations with several related queries one after the other, all asked in non-standard formats. And, of course, there was the English language itself, with all its messy complexity.

"There were fundamental areas of innovation that had to be done to go beyond Jeopardy - there was a tremendous amount of pre-processing, post-processing and tooling that we have added around the core engines," added Saxena. "It's the equivalent of getting a Ferrari engine then trying to build a whole race car around it. What we inherited was the core engine, and we said 'Okay, let's build a new thing that does all sorts of things the original Jeopardy system wasn't required to do'."

To get Watson from Jeopardy to oncology, there were three processes that the Watson team went through: content adaptation, training adaptation, and functional adaptation - or, to put it another way, feeding it medical information and having it weighted appropriately; testing it out with some practice questions; then making any technical adjustments needed - tweaking taxonomies, for example. The content adaptation for healthcare followed the same path as getting Watson up to speed for the quiz show: feed it information, show it what right looks like, then let it guess what right looks like and correct it if it's wrong. In Jeopardy's case, that meant feeding it thousands of question-and-answer pairs from the show, and then demonstrating what a right response looked like. Then it was given just the answers, and asked to come up with the questions. When it went wrong, it was corrected. Through machine learning, it would begin to get a handle on this answer-question thing, and modify its algorithms accordingly.
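That guess-and-correct loop can be sketched in a few lines. This is a toy, hypothetical illustration (a simple perceptron-style update over two invented features), not the machine-learning stack IBM actually used; the creamer example echoes the practice-round mistake recounted later in this piece.

```python
def train(weights, examples, scorers, lr=0.1, epochs=20):
    """examples: list of (clue, candidate_answers, correct_answer) triples,
    mirroring the question-and-answer pairs Watson was trained on."""
    for _ in range(epochs):
        for clue, candidates, truth in examples:
            confidence = lambda c: sum(
                w * s(clue, c) for w, s in zip(weights, scorers))
            guess = max(candidates, key=confidence)
            if guess != truth:
                # The guess was wrong: nudge each feature weight toward
                # whatever favours the correct answer over the guess.
                for i, scorer in enumerate(scorers):
                    weights[i] += lr * (scorer(clue, truth) - scorer(clue, guess))
    return weights

# Invented features: shared words with the clue, and "looks like a brand name".
scorers = [
    lambda clue, c: len(set(clue.lower().split()) & set(c.lower().split())),
    lambda clue, c: 1.0 if c[0].isupper() else 0.0,
]
weights = train([0.0, 0.0],
                [("first non-dairy powdered creamer",
                  ["milk", "Coffee-mate"], "Coffee-mate")],
                scorers)
print(weights)  # the second feature's weight rises after the 'milk' mistake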

Watson has moved on to solutions that can power searches from smartphones. Image: IBM

"It would be fed many cases where the history was known and proper treatment was known, and then, analogous to training for Jeopardy, it's been given cases and then it suggests therapies," Kohn said. Some data came from what IBM describes as a Jeopardylike game called Doctor's Dilemma, whose questions include 'the syndrome characterized by joint pain, abdominal pain, palpable purpura, and a nephritic sediment?'. (The answer, of course, is Henoch-Schnlein purpura.) The training, says Kohn, "is an ongoing process, and Watson is rapidly improving its ability to make reasonable recommendations the oncologists think are helpful." By 2012, there were two healthcare organisations that had started piloting Watson. Wellpoint, one of the US biggest insurers, was one of the pair of companies that helped define the application of Watson in health. The other was Sloane Kettering Memorial Cancer Centre (SKMCC), an organisation IBM already had a relationship with and which is located not far from both IBM's own Armonk headquarters and the research laboratories in York Heights, New York that still house the first Watson. And it was this relationship that helped spur Watson's first commercial move into working in the field of cancer therapies. While using Watson as a diagnosis tool might be its most obvious application in healthcare, using it to assist in choosing the right therapy for a cancer patient made even more sense. SKMCC was a tertiary referral centre by the time patients arrived, they already had their diagnosis. So Watson was destined first to be an oncologist's assistant, digesting reams of data SKMCC's own, medical journals, articles, patients notes and more along with patients' preferences to come up with suggestions for treatment options. Each would be weighted accordingly, depending on how relevant Watson calculated they were. Unlike its Jeopardy counterpart, healthcare Watson also has the ability to go online not all its data has to be stored. And while Watson had two million pages of medical data from 600,000 sources to swallow, it could still make use of the general knowledge garnered for Jeopardy details from Wikipedia, for example. (What it doesn't use, however, is the Urban Dictionary. Fed into Watson late last year, it was reportedly removed after answering a researcher's query with the word "bullshit". "We did find some interesting responses, so we had to shut that down," Saxena

said diplomatically. "That is not to be repeated, because it would be seen as very improper in certain cases, and we had to teach Watson the right business behaviour.")

This chart, done a year after Watson's Jeopardy win, shows some of its rapid progress. Image: IBM
As such, the sources are now medical publications like Nature and the British Medical Journal. And there are other safety nets too. "In the teaching phase, a doctor - a cancer care specialist in this case - sits down and asks questions of Watson and corrects the machine learning. The doctor and a data scientist are sitting next to each other, correcting Watson. Spurious material, or conflicted material, or something from a pharmaceutical company that the doctor feels may be biased - that is caught during the training cycle," added Saxena.

WellPoint and MSKCC used Watson as the basis for systems that could read and understand volumes of medical literature and other information - patients' treatment and family histories, for example, as well as clinical trials and articles in medical journals - to assist oncologists by recommending courses of treatment. A year of working with both organisations has produced commercial products: Interactive Care Insights for Oncology, and the WellPoint Interactive Care Guide and Interactive Care Reviewer. Interactive Care Insights for Oncology provides suggestions for treatment plans for lung cancer patients, while the WellPoint Interactive Care Guide and Interactive Care Reviewer reviews clinicians' suggested treatments against their patients' plans, and is expected to be in use at 1,600 healthcare providers this year.

Watson has bigger ambitions than being a clinician's assistant, however. Its medical knowledge is around that of a first-year medical student, according to IBM, and the company hopes to have Watson pass the general medical licensing board exams in the not too distant future. "Our work today is in the very early stages around practice of medicine, around chronic care diseases. We're starting with cancer and we will soon add diabetes, cardiology, mental health, other chronic diseases. And then our work is on the payment side, where we are streamlining the authorisation and approval process between hospitals, clinics and insurance companies," Saxena said. The ultimate aim for Watson is to be an aid to diagnosis: rather than just suggesting treatments for cancer, as it does today, it could assist doctors in identifying the diseases that bring people to the clinics in the first place.

Before then, there is work to be done. While big data vendors often trumpet the growth of unstructured data and the abandoning of relational databases, for Watson, it's these older sources of data that present more of a problem. "Watson works specifically with natural language - free text or text-like information - and that's approximately 80 percent of the huge volumes of healthcare information available to us," said Kohn. "Then there's the 20 percent that is structured data - basically, numerical data - or images, like MRIs, CAT scans, and so on. Watson does not process structured data directly and it doesn't interpret images. It can interpret the report attached to an image, but not the image itself."

In addition, IBM is working on creating a broader healthcare offering that will take it beyond its oncology roots. "Even though Watson is working with these two organisations, what the designers and computer scientists are focusing on [is] that whatever they develop is generalisable; it's not just niche for cancer therapy, and especially for the several cancers we're working with. We're using it as a learning process to create algorithms and methodologies that would be readily generalisable to any area of healthcare. They don't have to say, right, we have oncology under control, now let's start again with family practice or cardiology," Kohn said.

Citi and IBM have been collaborating on business systems since the days of the early mainframes. Image: IBM
Watson has also already found some interest in banking. Citi is using Watson to improve customer experience with the bank and create new services. It's easy to see how Watson could be put to use: say, deciding whether a borderline-risk business customer is likely to repay the loan they've applied for, or picking out cases of fraud or identity theft before customers are even aware they're happening. Citi is still early in its Watson experiments, though; a spokeswoman said the company is currently just "exploring use cases".

From here on in, rather than being standalone products, the next Watson offerings to hit the market will be embedded into products in the IBM Smarter Planet line. They're expected to appear in the second half of the year. The first such Smarter Planet product appeared in May: the IBM Watson Engagement Advisor. The idea behind the Engagement Advisor, aimed at contact centres, is that customer service agents can query their employers' databases and other information sources using natural language while they're conducting helpline conversations with their clients. One of the companies testing out the service is Australia's ANZ bank, where it will be assisting call centre staff with making financial services recommendations to people who ring up.

Ask Watson goes far beyond what Apple's Siri can do, IBM believes. Image: IBM

Watson could presumably one day scour available evidence for the best time to find someone able to talk and decide the communication channel most likely to generate a positive response, or pore over social media for disgruntled customers and provide answers to their problems in natural language.
There are also plans to change how Watson is delivered. Instead of just interacting with it via a call centre worker, customers will soon be able to get to grips with the Engagement Advisor themselves. Rather than have a call centre agent read out Watson-generated information to a customer with, say, a fault with their new washing machine, or to a stock-trader wanting advice on updating their portfolio, the consumer and trader could just quiz Watson directly from their phone or tablet, by typing their query straight into a business' app. Apps with Watson under the hood should be out in the latter half of this year, according to Forbes.

IBM execs have also previously suggested that Watson could end up a supercharged version of Siri, where people will be able to speak directly into their phone and pose a complex question for Watson to answer - a farmer holding up his smartphone to take video of his fields and asking Watson when to plant corn, for example. IBM is keen to spell out the differences between Watson and Siri. "Watson knows what it knows and, by listening, learning and using human-like thinking capabilities, uncovers insights from Big Data. Watson also quickly ascertains what it doesn't know. Siri, on the other hand, simply looks for keywords to search the web for lists of options that it chooses one from," the company says. But the comparison holds: Watson could certainly have a future as your infinitely knowledgeable personal assistant.

"Watson also quickly ascertains what it doesn't know. Siri, on the other hand, simply looks for keywords to search the web for lists of options that it chooses one from"IBM on Watson vs Siri
While adding voice-recognition capabilities to Watson should be no great shakes for IBM given its existing partnerships, such a move would also require Watson to recognise images (something IBM's already working on) and to query all sorts of sources of information, including newspapers, books, photos, repositories of data that have been made publicly available, social media and the internet at large. That Watson should take on such a role in the coming years - especially if the processing goes on in an IBM datacentre and not on the mobile itself, as you would expect - is certainly within the realms of the possible.

As IBM seeks to embed Watson's capabilities into more and more products, how far does the company think Watson will spread in the coming years? It will only say, gnomically: "as we continue to scale our capabilities, we intend to make Watson available as a set of services in many industries." Want a better answer? Better ask Watson.

Watson (computer)
From Wikipedia, the free encyclopedia

"IBM Watson" redirects here. For the laboratory, see Thomas J. Watson Research Center.

Watson's avatar, inspired by the IBM "smarter planet" logo[1]

Watson is an artificially intelligent computer system capable of answering questions posed in natural language,[2] developed in IBM's DeepQA project by a research team led by principal investigator David Ferrucci. Watson was named after IBM's founder, Thomas J. Watson.[3][4] The computer system was specifically developed to answer questions on the quiz show Jeopardy!.[5] In 2011, Watson competed on Jeopardy! against former winners Brad Rutter and Ken Jennings,[3][6][7] and received the first prize of $1 million.[8]

Watson had access to 200 million pages of structured and unstructured content consuming four terabytes of disk storage,[9] including the full text of Wikipedia,[10] but was not connected to the Internet during the game.[11][12] For each clue, Watson's three most probable responses were displayed on the television screen. Watson consistently outperformed its human opponents on the game's signaling device, but had trouble responding to a few categories, notably those having short clues containing only a few words.

In February 2013, IBM announced that the Watson software system's first commercial application would be for utilization management decisions in lung cancer treatment at Memorial Sloan-Kettering Cancer Center in conjunction with health insurance company WellPoint.[13] IBM Watson's business chief Manoj Saxena says that 90% of nurses in the field who use Watson now follow its guidance.[14]

Architecture[edit]

The high-level architecture of IBM's DeepQA used in Watson[15]

Watson is a question answering (QA) computing system built by IBM.[2] IBM describes it as "an application of advanced Natural Language Processing, Information Retrieval, Knowledge Representation and Reasoning, and Machine Learning technologies to the field of open domain question answering" which is "built on IBM's DeepQA technology for hypothesis generation, massive evidence gathering, analysis, and scoring."[2]

Hardware[edit]
According to IBM, Watson is a workload-optimized system designed for complex analytics, made possible by integrating massively parallel POWER7 processors and the IBM DeepQA software to answer Jeopardy! questions in under three seconds. Watson is made up of a cluster of ninety IBM Power 750 servers (plus additional I/O, network and cluster controller nodes in 10 racks) with a total of 2,880 POWER7 processor cores and 16 terabytes of RAM. Each Power 750 server uses a 3.5 GHz POWER7 eight-core processor, with four threads per core. The POWER7 processor's massively parallel processing capability is an ideal match for Watson's IBM DeepQA software, which is embarrassingly parallel (that is, a workload that is easily split up into multiple parallel tasks).[16] According to John Rennie, Watson can process 500 gigabytes, the equivalent of a million books, per second.[17] IBM's master inventor and senior consultant Tony Pearson estimated Watson's hardware cost at about $3 million,[18] and with 80 teraflops Watson would be placed 94th on the Top 500 Supercomputers list.[19] According to Rennie, the content was stored in Watson's RAM for the game because data stored on hard drives is too slow to access.[17]
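The quoted figures are easy to sanity-check. A quick back-of-the-envelope calculation, assuming four eight-core POWER7 chips per server (which is what a 2,880-core total implies for ninety machines), looks like this:

```python
servers = 90
chips_per_server = 4      # assumption: the Power 750 took up to four sockets
cores_per_chip = 8        # eight-core POWER7
threads_per_core = 4      # four SMT threads per core

cores = servers * chips_per_server * cores_per_chip
print(cores)                          # 2880, matching the quoted total
print(cores * threads_per_core)       # 11520 hardware threads
print(16 * 1024 // servers, "GiB")    # ~182 GiB of the 16 TB of RAM per server
```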

Software[edit]
Watson's software was written in various languages, including Java, C++, and Prolog, and uses the Apache Hadoop framework for distributed computing, the Apache UIMA (Unstructured Information Management Architecture) framework, IBM's DeepQA software and the SUSE Linux Enterprise Server 11 operating system.[9][20][21] As IBM explains, "more than 100 different techniques are used to analyze natural language, identify sources, find and generate hypotheses, find and score evidence, and merge and rank hypotheses."[22]

Data[edit]
The sources of information for Watson include encyclopedias, dictionaries, thesauri, newswire articles, and literary works. Watson also used databases, taxonomies, and ontologies; specifically, DBpedia, WordNet, and Yago were used.[23] The IBM team provided Watson with millions of documents, including dictionaries, encyclopedias, and other reference material, that it could use to build its knowledge.[12] Although Watson was not connected to the Internet during the game,[24] it contained 200 million pages of structured and unstructured content consuming four terabytes of disk storage,[9] including the full text of Wikipedia.[10]

Operation[edit]

When presented with a question, Watson would use thousands of algorithms simultaneously to find answers, then compile those answers to determine its level of confidence in any given answer.

The computer's techniques for unraveling Jeopardy! clues sounded just like mine. That machine zeroes in on key words in a clue, then combs its memory (in Watson's case, a 15-terabyte data bank of human knowledge) for clusters of associations with those words. It rigorously checks the top hits against all the contextual information it can muster: the category name; the kind of answer being sought; the time, place, and gender hinted at in the clue; and so on. And when it feels "sure" enough, it decides to buzz. This is all an instant, intuitive process for a human Jeopardy! player, but I felt convinced that under the hood my brain was doing more or less the same thing.

- Ken Jennings[25]

When playing Jeopardy!, all players must wait until host Alex Trebek reads each clue in its entirety, after which a light is lit as a "ready" signal; the first to activate their buzzer button wins the chance to respond.[12][26] Watson received the clues as electronic texts at the same moment they were made visible to the human players.[12] It would then parse the clues into different keywords and sentence fragments in order to find statistically related phrases.[12] Watson's main innovation was not the creation of a new algorithm for this operation, but rather its ability to quickly execute thousands of proven language analysis algorithms simultaneously to find the correct answer.[12][27] The more algorithms that find the same answer independently, the more likely Watson is to be correct.[12] Once Watson has a small number of potential solutions, it is able to check against its database to ascertain whether the solution makes sense.[12]

In a sequence of 20 mock games, human participants were able to use the average six to seven seconds that Watson needed to hear the clue and decide whether to signal for responding.[12] During that time, Watson also has to evaluate the response and determine whether it is sufficiently confident in the result to signal.[12] Part of the system used to win the Jeopardy! contest was the electronic circuitry that received the "ready" signal and then examined whether Watson's confidence level was great enough to activate the buzzer. Given the speed of this circuitry compared to the speed of human reaction times, Watson's reaction time was faster than the human contestants' except when the human anticipated (instead of reacted to) the ready signal.[28] After signaling, Watson speaks with an electronic voice and gives the responses in Jeopardy!'s question format.[12] Watson's voice was synthesized from recordings that actor Jeff Woodman made for an IBM text-to-speech program in 2004.[29]
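The decision logic described above (independent algorithms proposing answers, agreement raising confidence, and a threshold gating the buzzer) can be caricatured in a few lines of Python. The voting scheme and threshold below are invented for illustration; Watson's real confidence model was far more elaborate:

```python
from collections import Counter

BUZZ_THRESHOLD = 0.5  # hypothetical; Watson's threshold was learned, not fixed

def decide(proposals, ready_signal):
    """proposals: candidate answers returned independently by each algorithm.
    The more algorithms that agree on an answer, the higher the confidence."""
    best, votes = Counter(proposals).most_common(1)[0]
    confidence = votes / len(proposals)
    if ready_signal and confidence >= BUZZ_THRESHOLD:
        return best, confidence   # fire the buzzer (within ~8 ms of the signal)
    return None, confidence       # not confident enough: stay silent

print(decide(["Chicago", "Chicago", "Toronto", "Chicago"], ready_signal=True))
# -> ('Chicago', 0.75)
```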

Comparison with human players[edit]

Watson, Ken Jennings, and Brad Rutter in their Jeopardy! exhibition match.

Watson's basic working principle is to parse keywords in a clue while searching for related terms as responses. This gives Watson some advantages and disadvantages compared with human Jeopardy! players.[30] Watson has deficiencies in understanding the contexts of the clues. As a result, human players usually generate responses faster than Watson, especially to short clues.[12] Watson's programming prevents it from using the popular tactic of buzzing before it is sure of its response.[12] Watson has consistently better reaction time on the buzzer once it has generated a response, and is immune to human players' psychological tactics.[12][31]

The Jeopardy! staff used different means to notify Watson and the human players when to buzz,[28] which was critical in many rounds.[31] The humans were notified by a light, which took them tenths of a second to perceive.[32][33] Watson was notified by an electronic signal and could activate the buzzer within about eight milliseconds.[34] The humans tried to compensate for the perception delay by anticipating the light,[35] but the variation in the anticipation time was generally too great to fall within Watson's response time.[31] Watson did not attempt to anticipate the notification signal.[33][35]
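A toy simulation makes the timing argument concrete. The distributions below are illustrative assumptions (human perception around 200 ms, anticipation with wide variance, a short lockout for buzzing early), not measured figures from the show; only Watson's ~8 ms reaction comes from the paragraph above:

```python
import random

def race(trials=10_000):
    """Fraction of buzzes Watson wins against one human, under the
    assumptions stated above."""
    watson_wins = 0
    for _ in range(trials):
        watson = 8.0  # ms after the ready signal (figure cited above)
        if random.random() < 0.5:
            human = random.gauss(200, 30)       # reacting to the light
        else:
            human = random.gauss(0, 60)         # anticipating the light
            if human < 0:                        # buzzed early: assume a lockout
                human += 250
        watson_wins += watson < human
    return watson_wins / trials

print(race())  # ~0.97: wide anticipation variance rarely beats a fixed 8 ms
```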

Development history[edit]
Since Deep Blue's victory over Garry Kasparov in chess in 1997, IBM had been on the hunt for a new challenge. In 2004, IBM Research manager Charles Lickel, over dinner with coworkers, noticed that the restaurant they were in had fallen silent. He soon discovered the cause of this evening hiatus: Ken Jennings, who was then in the middle of his successful 74-game run on Jeopardy!. Nearly the entire restaurant had piled toward the televisions, mid-meal, to watch the phenomenon. Intrigued by the quiz show as a possible challenge for IBM, Lickel passed the idea on, and in 2005, IBM Research executive Paul Horn backed Lickel up, pushing for someone in his department to take up the challenge of playing Jeopardy! with an IBM system. Though he initially had trouble finding any research staff willing to take on what looked to be a much more complex challenge than the wordless game of chess, eventually David Ferrucci took him up on the offer.[36]

In competitions managed by the United States government, Watson's predecessor, a system named Piquant, was usually able to respond correctly to only about 35% of clues and often required several minutes to respond.[37][38][39] To compete successfully on Jeopardy!, Watson would need to respond in no more than a few seconds, and at that time, the problems posed by the game show were deemed impossible to solve.[12]

In initial tests run during 2006 by David Ferrucci, the senior manager of IBM's Semantic Analysis and Integration department, Watson was given 500 clues from past Jeopardy! programs. While the best real-life competitors buzzed in half the time and responded correctly to as many as 95% of clues, Watson's first pass could get only about 15% correct. During 2007, the IBM team was given three to five years and a staff of 15 people to solve the problems.[12] By 2008, the developers had advanced Watson such that it could compete with Jeopardy! champions.[12] By February 2010, Watson could beat human Jeopardy! contestants on a regular basis.[40]

While primarily an IBM effort, Watson's development team includes faculty and students from Rensselaer Polytechnic Institute, Carnegie Mellon University, University of Massachusetts Amherst, University of Southern California's Information Sciences Institute, University of Texas at Austin, Massachusetts Institute of Technology, New York Medical College, University of Trento, and Queens College, City University of New York.[15][41]

Competing on Jeopardy![edit]
Preparation[edit]

Watson demo at an IBM booth at a trade show

In 2008, IBM representatives communicated with Jeopardy! executive producer Harry Friedman about the possibility of having Watson compete against Ken Jennings and Brad Rutter, two of the most successful contestants on the show, and the program's producers agreed.[12][42]

Watson's differences with human players had generated conflicts between IBM and Jeopardy! staff during the planning of the competition.[30] IBM repeatedly expressed concerns that the show's writers would exploit Watson's cognitive deficiencies when writing the clues, thereby turning the game into a Turing test. To allay those concerns, a third party randomly picked the clues from previously written shows that were never broadcast.[30] Jeopardy! staff also showed concerns over Watson's reaction time on the buzzer. Originally Watson signaled electronically, but show staff requested that it press a button physically, as the human contestants would.[43] Even with a robotic "finger" pressing the buzzer, Watson remained faster than its human competitors. Ken Jennings noted, "If you're trying to win on the show, the buzzer is all," and that Watson "can knock out a microsecond-precise buzz every single time with little or no variation. Human reflexes can't compete with computer circuits in this regard."[31][35][44] Stephen Baker, a journalist who recorded Watson's development in his book "Final Jeopardy", reported that the conflict between IBM and Jeopardy! became so serious in May 2010 that the competition was almost canceled.[30]

Watson also learned from its mistakes. In one practice round, it was given the clue "This trusted friend was the first non-dairy powdered creamer," to which it replied, "What is milk?", mistaking the clue as asking for a dairy product. As part of the preparation, IBM constructed a mock set in a conference room at one of its technology sites to model the one used on Jeopardy!. Human players, including former Jeopardy! contestants, also participated in mock games against Watson, with Todd Alan Crain of The Onion playing host.[12] About 100 test matches were conducted, with Watson winning 65% of the games.[45]

To provide a physical presence in the televised games, Watson was represented by an "avatar" of a globe, inspired by the IBM "smarter planet" symbol. Forty-two colored threads criss-crossed the globe to represent Watson's state of thought; the number 42 was an in-joke referring to the novel The Hitchhiker's Guide to the Galaxy.[25] Joshua Davis, the artist who designed the avatar for the project, explained to Stephen Baker that there are 36 triggerable states that Watson was able to use throughout the game to show its confidence in responding to a clue correctly; he had hoped to find forty-two, to add another level to the Hitchhiker's Guide reference, but he was unable to pinpoint enough game states.[46]

A practice match was recorded on January 13, 2011, and the official matches were recorded on January 14, 2011. All participants maintained secrecy about the outcome until the match was broadcast in February.[47]

Practice match[edit]
In a practice match before the press on January 13, 2011, Watson won a 15-question round against Ken Jennings and Brad Rutter with a score of $4,400 to Jennings's $3,400 and Rutter's $1,200, though Jennings and Watson were tied before the final $1,000 question. None of the three players responded incorrectly to a clue.[48]

First match[edit]
The first round was broadcast February 14, 2011, and the second round on February 15, 2011. The right to choose the first category had been determined by a draw won by Rutter.[49] Watson, represented by a computer monitor display and artificial voice, responded correctly to the second clue and then selected the fourth clue of the first category, a deliberate strategy to find the Daily Double as quickly as possible.[50] Watson's guess at the Daily Double location was correct. At the end of the first round, Watson was tied with Rutter at $5,000; Jennings had $2,000.[49]

Watson's performance was characterized by some quirks. In one instance, Watson repeated a reworded version of an incorrect response offered by Jennings (Jennings said "What are the '20s?" in reference to the 1920s; then Watson said "What is 1920s?"). Because Watson could not recognize other contestants' responses, it did not know that Jennings had already given the same response. In another instance, Watson was initially given credit for a response of "What is leg?" after Jennings incorrectly responded "What is: he only had one hand?" to a clue about George Eyser (the correct response was, "What is: he's missing a leg?"). Because Watson, unlike a human, could not have been responding to Jennings's mistake, it was decided that this response was incorrect. The broadcast version of the episode was edited to omit Trebek's original acceptance of Watson's response.[51]

Watson also demonstrated complex wagering strategies on the Daily Doubles, with one bet at $6,435 and another at $1,246.[52] Gerald Tesauro, one of the IBM researchers who worked on Watson, explained that Watson's wagers were based on its confidence level for the category and a complex regression model called the Game State Evaluator.[53]

Watson took a commanding lead in Double Jeopardy!, correctly responding to both Daily Doubles. Watson responded to the second Daily Double correctly with a 32% confidence score.[52] Although it wagered only $947 on the clue, Watson was the only contestant to miss the Final Jeopardy! response in the category U.S. CITIES ("Its largest airport was named for a World War II hero; its second largest, for a World War II battle"). Rutter and Jennings gave the correct response of Chicago, but Watson's response was "What is Toronto?????"[52][54][55] Ferrucci offered reasons why Watson would appear to have guessed a Canadian city: categories only weakly suggest the type of response desired; the phrase "U.S. city" didn't appear in the question; there are cities named Toronto in the U.S.; and Toronto, Ontario has an American League baseball team.[56] Dr. Chris Welty, who also worked on Watson, suggested that it may not have been able to correctly parse the second part of the clue, "its second largest, for a World War II battle" (which was not a standalone clause despite following a semicolon, and required context to understand that it was referring to a second-largest airport).[57] Eric Nyberg, a professor at Carnegie Mellon University and a member of the development team, stated that the error occurred because Watson does not possess the comparative knowledge to discard that potential response as not viable.[55] Although not displayed to the audience as with non-Final Jeopardy! questions, Watson's second choice was Chicago. Both Toronto and Chicago were well below Watson's confidence threshold, at 14% and 11% respectively; this lack of confidence was the reason for the multiple question marks in Watson's response. The game ended with Jennings at $4,800, Rutter at $10,400, and Watson at $35,734.[52]
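The question-mark convention is easy to mimic. Here is a toy sketch of that display logic, with an invented mapping from confidence to punctuation (the source only says that low confidence produced the multiple question marks, not how many):

```python
def render(answer, confidence, threshold=0.5):
    """Format a response the way Watson's display did: confident answers get
    one question mark, low-confidence ones visibly pile them on."""
    if confidence >= threshold:
        return f"What is {answer}?"
    # Invented rule: one extra '?' per 0.1 of confidence shortfall, capped at 5.
    doubt = min(5, 1 + round((threshold - confidence) * 10))
    return "What is " + answer + "?" * doubt

print(render("Toronto", 0.14))  # 'What is Toronto?????' (14% confidence)
print(render("Chicago", 0.11))  # the runner-up, at 11% confidence
```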

Second match[edit]
During the introduction, Trebek (a Canadian native) joked that he had learned Toronto was a U.S. city, and Watson's error in the first match prompted an IBM engineer to wear a Toronto Blue Jays jacket to the recording of the second match.[58] In the first round, Jennings was finally able to choose a Daily Double clue,[59] while Watson responded to one Daily Double clue incorrectly for the first time in the Double Jeopardy! round.[60] After the first round, Watson placed second for the first time in the competition, after Rutter and Jennings were briefly successful in increasing their dollar values before Watson could respond.[60][61] Nonetheless, the final result was a victory for Watson, with a score of $77,147, besting Jennings, who scored $24,000, and Rutter, who scored $21,600.[62]

Final outcome[edit]
The prizes for the competition were $1 million for first place (Watson), $300,000 for second place (Jennings), and $200,000 for third place (Rutter). As promised, IBM donated 100% of Watson's winnings to charity, with 50% of those winnings going to World Vision and 50% going to World Community Grid.[63] Likewise, Jennings and Rutter donated 50% of their winnings to their respective charities.[64]

In acknowledgment of IBM and Watson's achievements, Jennings made an additional remark in his Final Jeopardy! response: "I for one welcome our new computer overlords", echoing a similar memetic reference to the episode "Deep Space Homer" of The Simpsons, in which TV news presenter Kent Brockman speaks of welcoming "our new insect overlords".[65][66] Jennings later wrote an article for Slate, in which he stated: "IBM has bragged to the media that Watson's question-answering skills are good for more than annoying Alex Trebek. The company sees a future in which fields like medical diagnosis, business analytics, and tech support are automated by question-answering software like Watson. Just as factory jobs were eliminated in the 20th century by new assembly-line robots, Brad and I were the first knowledge-industry workers put out of work by the new generation of 'thinking' machines. 'Quiz show contestant' may be the first job made redundant by Watson, but I'm sure it won't be the last."[25]

Public reaction[edit]
Philosopher John Searle argues that Watson, despite impressive capabilities, cannot actually think.[67] Drawing on his Chinese room thought experiment, Searle claims that Watson, like other computational machines, is capable only of manipulating symbols, and has no ability to understand the meaning of those symbols; however, Searle's experiment has its detractors.[68]

Match against members of the United States Congress[edit]


On February 28, 2011, Watson played an untelevised exhibition match of Jeopardy! against five members of the United States House of Representatives: Rush D. Holt, Jr. (D-NJ, a former Jeopardy! contestant), Jim Himes (D-CT), Jared Polis (D-CO), Nan Hayworth (R-NY) and Bill Cassidy (R-LA). IBM organized the event to "foster a conversation about how technology can positively impact society".[69] In the only round he played, Holt led, with Watson in second place. However, combining the scores across all matches, the final tally was $40,300 for Watson and $30,000 for the congressional players combined.[70]

Future uses of software system[edit]


According to IBM, "The goal is to have computers start to interact in natural human terms across a range of applications and processes, understanding the questions that humans ask and providing answers that humans can understand and justify."[40] It has been suggested by Robert C. Weber, IBM's general counsel, that Watson may be used for legal research.[71]

Watson is based on commercially available IBM Power 750 servers that have been marketed since February 2010. IBM also intends to market the DeepQA software to large corporations, with a price in the millions of dollars, reflecting the $1 million needed to acquire a server that meets the minimum system requirement to operate Watson. IBM expects the price to decrease substantially within a decade as the technology improves.[12]

Commentator Rick Merritt said that "there's another really important reason why it is strategic for IBM to be seen very broadly by the American public as a company that can tackle tough computer problems. A big slice of Big Blue's pie comes from selling to the U.S. government some of the biggest, most expensive systems in the world."[72]

On January 30, 2013, it was announced that Rensselaer Polytechnic Institute would receive a successor version of Watson, to be housed at the Institute's technology park and made available to researchers and students.

Healthcare[edit]
As of February 2011, IBM and Nuance Communications Inc. were partnering on a research project to develop, over the following 18 to 24 months, a commercial product that exploits Watson's capabilities as a clinical decision support system to aid the diagnosis and treatment of patients. Physicians at Columbia University are helping to identify critical issues in the practice of medicine where the Watson technology may be able to contribute, and physicians at the University of Maryland are working to identify the best way that a technology like Watson could interact with medical practitioners to provide the maximum assistance.[73][74][75]

In September 2011, IBM and WellPoint, a major healthcare solutions provider in the United States, announced a partnership to utilize Watson's data-crunching capability to help suggest treatment options and diagnoses to doctors.[76] Just as Watson analyzed massive amounts of data in Jeopardy! to reach a set of hypotheses and list several of the most likely outcomes, it could help doctors in diagnosing patients. Watson could analyze a patient's specific symptoms, medical history, and hereditary history, and synthesize that data with available unstructured and structured medical information, including published medical books and articles. IBM has made it clear that Watson is not intended to replace doctors, but to assist them in avoiding medical errors and sharpening medical diagnosis with the help of its advanced analytics technology. IBM intends to use Watson in other information-intensive fields as well, such as telecommunications, financial services, and government.[77][78][79]

In December 2011, in what has been compared to IBM Watson, Microsoft and GE announced a partnership to utilize technology in improving healthcare. They aim to use analytics, high-performance computing and software technologies to improve patient outcomes as well as clinical applications.[80]

IBM announced a partnership with Cleveland Clinic in October 2012. The company has sent Watson to the Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, where it will increase its health expertise and assist medical professionals in diagnosing and treating patients. The medical facility will utilize Watson's ability to store and process large quantities of information to help speed up and increase the accuracy of the diagnostic process. "Cleveland Clinic's collaboration with IBM is exciting because it offers us the opportunity to teach Watson to 'think' in ways that have the potential to make it a powerful tool in medicine," said C. Martin Harris, MD, chief information officer of Cleveland Clinic.[81]

On February 8, 2013, IBM announced that oncologists at the Maine Center for Cancer Medicine and Westmed Medical Group in New York had started to test the Watson supercomputer system in an effort to help diagnose lung cancer and recommend treatment.[82] In February 2013, IBM announced that Watson's first commercial application would be for utilization management decisions in lung cancer treatment at Memorial Sloan-Kettering Cancer Center in conjunction with health insurance company WellPoint.[13] Utilization management is the evaluation of the appropriateness, medical need and efficiency of health care services, procedures and facilities according to established criteria or guidelines and under the provisions of an applicable health benefits plan.
