Silverlight Neural Networks

Submitted by jeffheaton on Wed, 09/16/2009 - 03:50

Silverlight is one of the platforms supported by Encog. Silverlight is a way to deploy traditional applications in a web browser, roughly Microsoft's answer to Flash. On this page you will see several neural network examples that make use of Silverlight and the Encog Artificial Intelligence Framework. These programs can be run directly from your browser (Windows, Linux and Mac). The examples show several Encog techniques, such as neural networks, simulated annealing, genetic algorithms and resilient propagation training. Read below for more information on how each of these four examples works.

Traveling Salesman

The traveling salesman problem, or TSP, is a classic problem in artificial intelligence (AI). Given a number of cities, attempt to find the shortest path through the cities, visiting each city only once. This sort of problem can be solved by traditional means, if you have enough time. As the number of cities increases, the number of different combinations increases exponentially; the TSP is NP-hard. Just ten cities produce over one million combinations. This example allows you to use Encog to produce a potential solution to the TSP by several means. A genetic algorithm simulates natural selection, allowing a population of solutions to compete for the privilege to mate and create the next generation. This is an effective means of producing a solution. Simulated annealing, which simulates the metallurgical process of annealing, can also be used. Finally, you can find a solution using a neural network type called a Boltzmann machine. All three of these approaches are demonstrated by this demo.

Handwriting Recognition

This demo shows how you can use Encog to recognize characters. When the program first starts, Encog is ready to recognize digits. You can also load the alphabet (Latin characters), or even draw your own characters for Encog to recognize.
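The simulated annealing approach to the TSP described above can be sketched without Encog itself. The following is a minimal, framework-agnostic Python illustration: it perturbs a tour with 2-opt reversals and accepts worse tours with probability exp(-delta/T), cooling the temperature each cycle. The city coordinates, temperatures and cooling rate here are hypothetical, not Encog's defaults.

```python
import math
import random

def tour_length(cities, tour):
    """Total round-trip distance for visiting the cities in the given order."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal_tsp(cities, start_temp=10.0, end_temp=0.01, cooling=0.97, moves=200):
    """Simulated annealing: accept a worse tour with probability exp(-delta/T)."""
    tour = list(range(len(cities)))
    random.shuffle(tour)
    best, best_len = tour[:], tour_length(cities, tour)
    temp = start_temp
    while temp > end_temp:
        for _ in range(moves):
            i, j = sorted(random.sample(range(len(cities)), 2))
            # 2-opt move: reverse the segment between two cut points.
            candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            delta = tour_length(cities, candidate) - tour_length(cities, tour)
            if delta < 0 or random.random() < math.exp(-delta / temp):
                tour = candidate
                if tour_length(cities, tour) < best_len:
                    best, best_len = tour[:], tour_length(cities, tour)
        temp *= cooling  # cool down, making worse moves less likely
    return best, best_len

random.seed(42)
cities = [(random.random() * 100, random.random() * 100) for _ in range(10)]
tour, length = anneal_tsp(cities)
print(length)
```

Early on, the high temperature lets the search escape local minima; as the temperature falls, the search settles into a short tour. Encog's genetic algorithm and Boltzmann machine solvers attack the same problem by different means.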
This demo uses a type of neural network called a Self-Organizing Map (SOM), also sometimes called a Kohonen neural network. As you draw characters into the rectangular area, they are downsampled into the grid that you see below. This forms the input to the neural network. The output of the neural network is the winning neuron, which identifies the recognized character. There is one output neuron for each character the network was trained on.
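The two steps described above, downsampling the drawing and picking a winning neuron, can be sketched in plain Python. This is an illustrative sketch, not Encog's API: the grid sizes and the two "trained" weight vectors below are made up, and a real SOM would learn its weights from sample characters.

```python
def downsample(pixels, out_w, out_h):
    """Collapse a binary drawing into an out_w x out_h grid: a cell becomes
    1.0 if any source pixel inside it is set, else -1.0 (bipolar input)."""
    in_h, in_w = len(pixels), len(pixels[0])
    grid = []
    for gy in range(out_h):
        for gx in range(out_w):
            y0, y1 = gy * in_h // out_h, (gy + 1) * in_h // out_h
            x0, x1 = gx * in_w // out_w, (gx + 1) * in_w // out_w
            hit = any(pixels[y][x] for y in range(y0, max(y1, y0 + 1))
                                   for x in range(x0, max(x1, x0 + 1)))
            grid.append(1.0 if hit else -1.0)
    return grid

def winner(weights, x):
    """SOM recognition: the winning neuron is the one whose weight vector
    is closest (smallest squared Euclidean distance) to the input."""
    dists = [sum((wi - xi) ** 2 for wi, xi in zip(w, x)) for w in weights]
    return dists.index(min(dists))

# Two hypothetical "trained" neurons; the input matching neuron 1 wins.
weights = [[1.0, -1.0, 1.0, -1.0], [-1.0, 1.0, -1.0, 1.0]]
print(winner(weights, [-1.0, 1.0, -1.0, 1.0]))  # prints 1

# Downsample a diagonal stroke on an 8x8 canvas to a 4x4 input grid.
pixels = [[1 if x == y else 0 for x in range(8)] for y in range(8)]
vec = downsample(pixels, 4, 4)
```

Downsampling is what makes the demo tolerant of where and how large you draw: two characters with the same shape produce similar grids, so they excite the same winning neuron.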

Sunspot Prediction

Sunspots are quite periodic and relatively easy to predict. You can see a graph of sunspots displayed above. The black region of the graph is actual sunspot data, collected over many years. The blue region shows Encog's prediction. Initially, the prediction is pretty bad; however, train for a number of iterations and it will improve. This demo uses a feedforward neural network to predict the future number of sunspots. It is trained using resilient propagation (RPROP). As more iterations are completed, the network becomes better at predicting future sunspots. However, too much training can overfit the network and decrease its effectiveness.

Simple Video Game

The lunar lander is a classic computer game that has existed almost as long as computers themselves. A pilot must make a soft landing with a limited amount of fuel. Once the fuel is exhausted, the lander simply falls. Thrusters can be used to slow the fall, or even gain altitude. This example uses a feedforward neural network to fly the lander. The neural network is fed the current altitude, the amount of remaining fuel, and the current velocity, and it outputs whether to thrust under those conditions. The network is trained using a genetic algorithm. An initial population of random networks is created to fly the lunar lander. They are all scored according to how softly they landed, how long they stayed aloft, and how much fuel they had left at the end. The best of the random networks mate to form the next generation. This process continues, with each successive generation gaining some of the best characteristics of the generation before it. As the networks improve, the score increases. Once you see a score above 6,000 to 7,000, you should stop training and run the simulation. You will see the neural network land the lander graphically.
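The core of the sunspot demo is encoding a time series for supervised training: each input is a window of past values and the target is the value that follows. As a framework-free illustration, here is that sliding-window encoding on a toy periodic series, fitted with a plain gradient-descent linear predictor; Encog itself uses a feedforward network trained with RPROP, and the window size and learning rate below are arbitrary choices.

```python
import math

def make_windows(series, size):
    """Each training input is `size` consecutive values; the target is the
    value that immediately follows that window."""
    xs, ys = [], []
    for i in range(len(series) - size):
        xs.append(series[i:i + size])
        ys.append(series[i + size])
    return xs, ys

# Toy periodic "sunspot-like" series (real data would be normalized counts).
series = [0.5 + 0.5 * math.sin(i / 3.0) for i in range(60)]
xs, ys = make_windows(series, size=5)

# Minimal linear predictor trained by stochastic gradient descent.
w = [0.0] * 5
b = 0.0
lr = 0.05
for epoch in range(500):
    for x, y in zip(xs, ys):
        pred = sum(wi * xi for wi, xi in zip(w, x)) + b
        err = pred - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

# Mean squared error after training; it falls far below the initial level.
mse = sum((sum(wi * xi for wi, xi in zip(w, x)) + b - y) ** 2
          for x, y in zip(xs, ys)) / len(xs)
print(mse)
```

Training error on this toy series drops as iterations accumulate, mirroring what you see in the demo; on real data, error measured on *held-out* points is what eventually rises again when the network overfits.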
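The lander's genetic algorithm can also be sketched outside Encog. The controller below is a simple linear threshold on (altitude, velocity, fuel) rather than a full neural network, and the physics constants, population size and scoring weights are all hypothetical; only the overall scheme, score the population, let the best mate, mutate the children, follows the description above.

```python
import random

def simulate(w):
    """Fly a simple lander with a linear threshold controller.
    The score rewards time aloft and leftover fuel, and penalizes hard landings."""
    alt, vel, fuel, score = 100.0, 0.0, 50.0, 0.0
    for _ in range(1000):
        thrust = fuel > 0 and (w[0] * alt + w[1] * vel + w[2] * fuel + w[3]) > 0
        if thrust:
            fuel -= 1
            vel += 0.3                         # thruster acceleration (made up)
        vel -= 0.16                            # gravity (made up)
        alt += vel
        score += 1                             # reward staying aloft
        if alt <= 0:
            score += fuel * 10                 # reward unused fuel
            score -= min(abs(vel) * 100, 500)  # penalize hard landings
            break
    return score

def evolve(pop_size=30, generations=40):
    """Genetic algorithm: rank by score, keep the best half, breed the rest."""
    pop = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=simulate, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # crossover
            child[random.randrange(4)] += random.gauss(0, 0.1)   # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=simulate)

random.seed(1)
best = evolve()
print(simulate(best))
```

Because no one writes the piloting rules, the behavior emerges from selection pressure alone: generation by generation, controllers that hover longer and touch down more gently crowd out the rest, just as the scores climb in the demo.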





Copyright 2005 - 2012 by Heaton Research, Inc. Heaton Research and Encog are trademarks of Heaton Research.
