
STEM

Volume 1

Science, Technology, Engineering, Mathematics

Number 2 April 2012

TURING Was Right: On How the Leopard Got Its Spots

STEM
Vol. 1 No. 2

Science, Technology, Engineering, Mathematics

Quarterly Bulletin of Jagan Nath University, Jaipur April 2012
Contents (page numbers in parentheses):

Green Computing for Green Building: A Brief Analysis of the Measures & Prospects: Ms. Meenu Dave & Prof. Y. S. Shishodia (4-9)
Continuously Varying Transmission: M.P. Singh (10-15)
3-D without four eyes (3-D displays are trying to shed their spectacles): Sudhanshu Mathur (16-17)
Go Green with Green Building and by Green Construction Materials: Bharat Nagar (18-22)
Blending Technology: The Real Pinnacle for the 21st Century Learning Environment: Suraj Yadav (23-28)
Nano solar cells as an efficient source of renewable solar energy in green buildings: Pramod Kumar (29-36)
Difficulties in Teaching English in India and Importance of the Bilingual Method: Dr. Preeti Bala Sharma (37-40)
Our Biotech Future: Dr Vikas Bishnoi (41-44)
5G Wireless - The Next Step in Internet Technology: Sudarshan Kumar Jain (45-46)
Error control coding for next generation wireless system: M. L. Saini (47-48)
Physical characterizations of nano-materials by physical instrumentation: Pramod Kumar (49-55)
Energy consumption & performance improvements of Green cloud computing: Mithilesh Kumar Dubey & Navin Kumar (56-62)

STEM NEWS
TURING Was Right: On How the Leopard Got Its Spots
Alan Turing, best known as the father of modern computer science (for his Turing machine, the Turing test for artificial intelligence, and his cryptography work during World War II), sketched out in 1952 a biological model in which two chemicals, an activator and an inhibitor, could interact to form the basis for everything from the colour patterns of a butterfly's wings to the black and white stripes of a zebra or the spots of a leopard. In that paper, one of the most important in theoretical biology, Turing postulated a chemical hypothesis for the generation of coat patterns. He suggested that biological form follows a pre-pattern in the concentration of chemicals he called morphogens, whose existence was not known at that time. Turing began with the assumption that morphogens can react with one another and diffuse through cells. He then employed a mathematical model to show that if morphogens react and diffuse in an appropriate way, spatial patterns of morphogen concentration can arise from an initially uniform distribution in an assemblage of cells.

Turing's model has spawned an entire class of models that are now referred to as reaction-diffusion models. In a typical reaction-diffusion model one starts with two morphogens that can react with each other and diffuse at varying rates. In the absence of diffusion (in a well-stirred reaction, for example) the two morphogens would react and reach a steady uniform state. If the morphogens are now allowed to diffuse at equal rates, any spatial variation from that steady state will be smoothed out. If, however, the diffusion rates are not equal, diffusion can be destabilizing: the reaction rates at any given point may not be able to adjust quickly enough to reach equilibrium. If the conditions are right, a small spatial disturbance can become unstable and a pattern begins to grow. Such instability is said to be diffusion driven.

In reaction-diffusion models of coat patterns it is assumed that one of the morphogens is an activator that causes the melanocytes to produce one kind of melanin, say black, and the other is an inhibitor that results in the pigment cells producing no melanin. Suppose the reactions are such that the activator increases its concentration locally and simultaneously generates the inhibitor. If the inhibitor diffuses faster than the activator, an island of high activator concentration will be created within a region of high inhibitor concentration.

One can understand this phenomenon through the analogy of a forest fire. In an attempt to minimize potential damage, a number of fire fighters with helicopters and fire-fighting equipment are dispersed throughout the forest. Now imagine that a fire (the activator) breaks out, and a fire front starts to propagate outward. Initially there are not enough fire fighters (the inhibitor) in the vicinity of the fire to put it out. Flying in their helicopters, however, the fire fighters can outrun the fire front and spray fire-resistant chemicals on trees; when the fire reaches the sprayed trees, it is extinguished and the front is stopped. If fires break out spontaneously in random parts of the forest, over the course of time several fire fronts (activation waves) will propagate outward. Each front in turn causes the fire fighters in their helicopters (inhibition waves) to travel out faster and quench the front at some distance ahead of the fire. The final result of this scenario is a forest with blackened patches of burned trees interspersed with patches of green, unburned trees.
In effect, this outcome mimics that of a diffusion-driven reaction-diffusion mechanism. The type of pattern that results depends on the various parameters of the model and can be obtained from mathematical analysis. Harvard University researchers have now shown (2012) that Nodal and Lefty, two proteins linked to the regulation of asymmetry in vertebrates, fit the model described by Turing in 1952. They have shown that the activator protein Nodal moves through the tissue far more slowly than its inhibitor Lefty. These proteins are a clear example of a Turing model operating in vivo.
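To make the reaction-diffusion idea concrete, the sketch below numerically integrates a simple two-species system on a 2-D grid. It uses a Gray-Scott-type model, chosen here purely for illustration (Turing's 1952 paper analyses a more general linearized system), and the grid size and parameter values are illustrative assumptions rather than figures from the article. Here the substrate u, which plays the inhibiting role, diffuses twice as fast as the activator v, and spots emerge from a nearly uniform initial state.

```python
import numpy as np

# Gray-Scott reaction-diffusion: u is the "inhibitor-like" substrate, v the activator.
# Parameter values (Du, Dv, F, k) are illustrative assumptions known to give spots.
N = 128                  # grid size
Du, Dv = 0.16, 0.08      # diffusion rates: u diffuses faster than v
F, k = 0.035, 0.065      # feed and kill rates
dt = 1.0

u = np.ones((N, N))
v = np.zeros((N, N))
# Seed a small square perturbation plus noise so patterns can nucleate.
u[N//2-5:N//2+5, N//2-5:N//2+5] = 0.50
v[N//2-5:N//2+5, N//2-5:N//2+5] = 0.25
u += 0.02 * np.random.random((N, N))
v += 0.02 * np.random.random((N, N))

def laplacian(a):
    """Five-point Laplacian with periodic boundaries."""
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

for step in range(10000):
    uvv = u * v * v
    u += dt * (Du * laplacian(u) - uvv + F * (1 - u))
    v += dt * (Dv * laplacian(v) + uvv - (F + k) * v)

# 'v' now holds a spotted Turing-type pattern; threshold it to "melanin" spots.
pattern = (v > 0.2).astype(int)
print(pattern.sum(), "pixels would be pigmented in this toy model")
```

Plotting v (for example with matplotlib's imshow) shows the spotted pattern that has grown out of the near-uniform initial condition.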

This Bulletin is a quarterly publication for the dissemination of didactic information on Science, Technology, Engineering and Mathematics (STEM) and related activities in Jagan Nath University and elsewhere. Tell Us What You Think: Please send your comments, observations and suggestions to the Editor by email at pvc@jagannathuniversity.org. DISCLAIMER: Any views or opinions presented in this Bulletin, either as Editorial or through Articles, are solely those of the author(s), including the Editor, and Jagan Nath University is in no way responsible for any infringement of Copyright or IPR. Readers are free to use the material included in this Bulletin; however, they are expected to acknowledge it and inform the Editor.

EDITORIAL: EVERYONE AN ENGINEER: ARE WE ENCOURAGING IT?


Children are born scientists and engineers: they are fascinated with building things, with taking things apart, and with trying to understand in their own way how things work. They learn to assemble things by trial and error, and they try to make sense of the various processes and phenomena they come across. These activities involve the motor-sensory faculties of hand and mind, physical movement and the reflexes of the child, and they help a child acquire the key motor skills at the primary level that form the foundation of the ability to navigate the world in a more holistic way. After some time children become expert at these tasks, and they apply the technique and knowledge so acquired in handling similar situations deftly. They are curious and try to satisfy their curiosity by interacting with their friends, parents, relations and teachers, and now, of course, through the internet and sites such as Wikipedia and How Stuff Works.

The present educational set-up at the undergraduate and graduate levels, however, has done little to develop the science, engineering and technology literacy of its students. Educational institutions, barring a few, follow an industrial model of student mass production. A broadcast is, by definition, the transmission of information from transmitter (teacher or instructor) to receiver (student) in a one-way, linear fashion. This way of teaching and learning may have been appropriate for a previous economy and generation, but it is increasingly failing to meet the needs of a new generation of students who are about to enter the global knowledge economy.

I am sure many of you would have pondered seriously over the present state of technical skills of the students and what can be done to make them better engineers and human beings who can make significant contributions to their profession and the country. The Editor would appreciate your opinion on this matter.
Prof. Y S SHISHODIA

Green Computing for Green Building: A Brief Analysis of the Measures & Prospects
Ms. Meenu Dave (1) & Prof. Y. S. Shishodia (2)
(1) Assistant Professor, Department of Computer Science, Jagan Nath University, Jaipur; (2) Pro Vice-Chancellor, Jagan Nath University, Jaipur

In the past few years, ecological and energy-conservation issues have taken centre stage in the global economic arena. The reality of escalating energy costs, coupled with growing concern over global warming and other environmental issues, has shifted the social and economic focus of the business community. It is becoming clearer that the way in which we are behaving as a society is environmentally unsustainable and is causing irreparable damage to the planet. With greenhouse gas emissions widely accepted as the chief contributor to global warming, governments and business corporations around the world are now concentrating on tackling environmental issues by adopting environment-friendly practices.

Large amounts of material are used, and large amounts of energy consumed, during the construction and operation of an average building. One growing area of interest is therefore the implementation of green technologies when constructing new facilities, in order to produce buildings that are more energy efficient and have less impact on the natural environment during operation. A Green Building is one that is in harmony with its environment and can function using an optimum amount of renewable energy, consume less water, conserve natural resources, generate less waste and create spaces for healthy and comfortable living, compared with conventional buildings. Typically, a Green Building:
- uses the maximum amount of natural lighting during the daytime in order to reduce the use of conventional energy, and conserves solar energy using photovoltaic panels;
- uses passive lighting designs (with heat-absorbing tiles and skylights);
- uses smart lighting, which adjusts the electric lights according to the available natural light, thus lowering electricity requirements, and motion-sensitive lights that turn themselves off when a room is empty;
- utilizes wind energy to regulate the temperature of rooms;
- uses low-flow fixtures in bathrooms and kitchens to reduce excess water consumption;
- uses BEE (Bureau of Energy Efficiency) star-labelled electrical appliances;
- uses locally sourced materials in the construction, interiors and operation of the building, which not only reduces transport-related pollution but also helps the local economy;
- uses low-VOC (Volatile Organic Compound) materials, since VOCs such as formaldehyde, urea formaldehyde and urethanes are hazardous to general health;
- harvests rainwater and collects storm water (rainwater harvesting, rain gardens, etc.) to improve the groundwater level (a rough sizing sketch is given after this list);
- adopts waste-water treatment such as root-zone treatment, a simple eco-friendly process that filters waste water from septic tanks into raw usable water, which can later be used for gardening and landscaping;
- segregates solid waste into biodegradable and non-biodegradable streams before disposal;
- uses highly effective insulation, including green roofs (which lower heating and cooling costs);
- uses natural gas to heat the building.
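As a concrete illustration of the rainwater-harvesting point above, the sketch below estimates annual harvestable volume from a roof using the standard relation volume = catchment area x rainfall x runoff coefficient. The roof area, rainfall figure and runoff coefficient are illustrative assumptions, not values from the article.

```python
def harvestable_volume_litres(roof_area_m2: float,
                              annual_rainfall_mm: float,
                              runoff_coefficient: float = 0.8) -> float:
    """Annual rainwater harvesting potential.

    1 mm of rain on 1 m^2 of catchment is 1 litre of water; the runoff
    coefficient accounts for losses (evaporation, splash, first flush).
    """
    return roof_area_m2 * annual_rainfall_mm * runoff_coefficient

# Illustrative numbers only: a 200 m^2 roof in a city receiving ~600 mm of rain a year.
volume = harvestable_volume_litres(roof_area_m2=200, annual_rainfall_mm=600)
print(f"Approximate harvestable volume: {volume:,.0f} litres/year")  # ~96,000 litres
```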

Green computing, or green IT, refers to environmentally sustainable computing: the study and practice of designing, manufacturing, using and disposing of ICT efficiently and effectively with minimal or no impact on the environment. Green IT also strives to achieve economic viability and improved system performance and use, while abiding by our social and ethical responsibilities. Thus green IT includes the dimensions of environmental sustainability, the economics of energy efficiency, and the total cost of ownership, which includes the cost of disposal and recycling. The field of "green technology" encompasses a broad range of subjects, from new energy-generation techniques to the study of advanced materials for daily use. Green technology focuses on reducing the environmental impact of the industrial processes and innovative technologies demanded by the Earth's growing population.

Green IT strategies aim to:
- reduce the amount of pollutants released into the surroundings;
- save power and reduce the amount of heat produced by electronics;
- reduce the burden on the paper industry;
- encourage the use of renewable resources;
- promote effective utilization of natural resources.

Green computing prompts us to go green and, in doing so, helps us save green (money) as well. The Green Grid is a global consortium of IT companies and professionals seeking to improve energy efficiency in data centres and business computing ecosystems around the globe. Board members of The Green Grid include AMD, EMC, Intel, APC, HP, Microsoft, Dell, IBM and Oracle. There are four main paths to environmental sustainability and efficient use of energy in computing:
- Green Use: using computers and related products efficiently so that energy consumption is minimized.
- Green Disposal: reusing old computers and properly disposing of and recycling unwanted products.
- Green Design: designing energy-efficient and environmentally friendly computers and accessories.
- Green Manufacturing: manufacturing computers and related equipment in a way that has minimal effect on the environment.
These four paths span a number of activities and areas: efficient use of energy; power saving; server virtualization; environmentally sustainable design; responsible disposal and recycling; risk mitigation; use of renewable energy sources; eco-labelling; and the use of green methodologies and assessment tools.

MEASURES OF GREEN COMPUTING

Lower Power Hardware. When Intel announced in 2005 that the new computing mantra was to be "performance per watt" (rather than processor speed), green computing in general, and lower-power hardware in particular, started to go mainstream. PCs can be made to use less electricity by using a lower-power processor, opting for onboard graphics (rather than a separate graphics card), using passive cooling (rather than energy-consuming fans), and using either a solid-state drive (SSD) in place of a spinning hard drive as the system disk, or a 1.8" or 2.5" rather than a 3.5" conventional hard drive.

Virtualization. Virtualization enables the abstraction of computer resources so that two or more computer systems can run on one set of hardware. This capability enables organisations to realise significant benefits, including:
- reducing the number of servers required to support computing needs;
- reducing hardware support costs;
- reducing hardware costs for disaster recovery;
- reducing data centre power and cooling costs.
With virtualized server consolidation a company can make far more optimal use of computing resources by removing the idle server capacity that is usually spread across a sprawl of physical servers, and very significant energy savings can result. IBM, for example, is currently engaged in its Project Big Green, which involves the replacement of about 2,900 individual servers with about 30 mainframes to achieve an expected 80 per cent energy saving over five years. To assist further with energy conservation, virtualization can take place at the level of files as well as servers. File virtualization software is already available that will allocate files across physical disks based on their utilization rates (rather than on their logical volume location). This enables frequently accessed files to be stored on high-performance, low-capacity drives, whilst files in less common use are placed on more power-efficient, low-speed, larger-capacity drives.
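A back-of-the-envelope sketch of why consolidation saves energy is given below. All of the numbers (server count, utilization, power draw, consolidation ratio) are hypothetical assumptions made for the arithmetic, not figures from IBM's Project Big Green.

```python
# Rough estimate of energy saved by consolidating under-utilized physical
# servers onto fewer virtualized hosts. All inputs are hypothetical.

physical_servers = 100          # existing servers, each lightly utilized
watts_per_server = 400          # average draw of one physical server
vm_per_host = 10                # consolidation ratio achievable with virtualization
watts_per_host = 600            # a beefier virtualization host draws more per box

hours_per_year = 24 * 365

before_kwh = physical_servers * watts_per_server * hours_per_year / 1000
hosts_needed = -(-physical_servers // vm_per_host)        # ceiling division
after_kwh = hosts_needed * watts_per_host * hours_per_year / 1000

saving = 1 - after_kwh / before_kwh
print(f"Before: {before_kwh:,.0f} kWh/yr, after: {after_kwh:,.0f} kWh/yr, "
      f"saving ~{saving:.0%}")   # ~85% with these assumed numbers
```

With these assumed inputs the saving comes out in the same region as the 80 per cent figure quoted above, which is the point of the exercise: most of the energy in a sprawl of idle servers is simply keeping boxes switched on.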

Cloud Computing. Cloud computing is where software applications, processing power, data and potentially even artificial intelligence are accessed over the Internet. One of its many benefits is that it enables anybody to obtain the environmental benefits of virtualization. Whilst most servers in company data centres run at around 30 per cent capacity, most cloud vendors' servers run at 80 per cent capacity or more. By choosing to cloud compute, and in particular by adopting online processing power in the form of PaaS or IaaS, companies may therefore reduce their carbon footprint. The main advantage of cloud computing has less to do with the technology itself than with its implementation: cloud systems are by design decoupled from physical hardware, which allows the near-instantaneous creation and destruction of a (virtual) server. Companies no longer have to provision for their anticipated maximum load; they can run exactly the right amount of hardware.

Energy Efficient Coding. The principle behind energy-efficient coding is to save power by getting software to make less use of the hardware, rather than continuing to run the same code on hardware that uses less power. Combining the two approaches can, of course, lead to even greater savings. Energy-efficient coding may involve improving computational efficiency so that data is processed as quickly as possible and the processor can drop into a lower-power "idle" state. Alternatively, or in addition, it may involve data-efficiency measures, ensuring that thought is given in software design to where data is stored and how often it is accessed.

Improved Repair, Re-Use, Recycling and Disposal. Even better than more effective disposal are hardware repair, the recycling of old computer hardware into a second-use situation, the re-use of components from PCs beyond repair, and less frequent upgrading of computer equipment in the first place. Personal computers are among the most modular, and hence the most repairable, products purchased by individuals and organizations. Recycling of computers, which is expensive and time-consuming at present, could be made more effective by recycling computer parts separately, with an option of reuse or resale.
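As a small illustration of the energy-efficient coding principle described above, the snippet below contrasts a busy-wait polling loop, which keeps the CPU spinning, with an event-driven blocking wait that lets the processor drop into its low-power idle state. The scenario is a sketch of the principle under assumed conditions, not a benchmark.

```python
import queue
import threading
import time

work_queue: "queue.Queue[str]" = queue.Queue()

def busy_wait_consumer() -> None:
    """Anti-pattern: polls in a tight loop, so the CPU never idles."""
    while True:
        try:
            item = work_queue.get_nowait()
        except queue.Empty:
            continue              # spin and burn cycles while waiting
        if item == "stop":
            return

def blocking_consumer() -> None:
    """Energy-friendlier: blocks until work arrives, letting the CPU idle."""
    while True:
        item = work_queue.get()   # sleeps inside the OS until an item appears
        if item == "stop":
            return

# Usage sketch: the blocking consumer does the same work with ~0% CPU while idle.
t = threading.Thread(target=blocking_consumer)
t.start()
time.sleep(1.0)                   # simulate a quiet period: no CPU burned here
work_queue.put("stop")
t.join()
```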
Less Pollutant Manufacture. A great many hazardous chemicals, including lead, mercury, cadmium, beryllium, brominated flame retardants (BFRs) and polyvinyl chloride (PVC), are used to make computers. By reducing the use of such substances, hardware manufacturers could prevent people from being exposed to them, as well as enabling more electronics waste to be safely recycled. This objective can also be pursued by replacing petroleum-based plastics with bio-plastics (plant-based polymers that require less oil and energy to produce than traditional plastics), with the challenge of keeping such computers cool enough that the electronics do not melt them. Power-hungry displays can be replaced with displays made of OLEDs (organic light-emitting diodes), and toxic materials such as lead can be replaced by silver and copper. Whilst less pollutant computer manufacture is something that clearly needs to be undertaken by the companies who make the hardware in the first place, individuals and organizations can play an important role through their choice of new hardware: both are in a position to influence the quantity of hazardous chemicals they purchase in the form of computing equipment.

Computing and Sustainability. There are three basic ways in which computer applications can help reduce humanity's environmental impact: increasing business efficiency, dematerialization, and travel reduction. Microprocessors can increase business efficiency by enabling economies to scale in clean, or at least cleaner, ways and by reducing the wastage of natural resources (for example through better logistics co-ordination, so that goods are shipped a minimum number of times). Whilst computing equipment may be far more environmentally unfriendly in its manufacture, use and disposal than it could be, the productivity gains it has allowed modern economies to make have already partly offset what would have been an even larger growth in emissions.

Dematerialization refers to the replacement of physical items, or physically manipulative services, with purely digital equivalents. Music, video, computer software, tickets and a range of financial and business paperwork have already started to become digital commodities, and the environmental benefits of such a transformation can be significant. For example, as Intel notes, reading the news on a mobile computer releases 32 to 140 times less carbon dioxide and other gases (including nitrogen and sulphur oxides) than consuming a hard-copy newspaper. People as well as goods can effectively be dematerialized if computer applications enable travel reduction: many face-to-face meetings (granted, not all of them) can now quite effectively be replaced with audio or video conferences, and with many company resources (including e-mail, intranets and SaaS applications) available anytime, anywhere online, teleworking is also a highly resource-efficient possibility.

Consumers have not cared much about environmental impact when buying computers; their prime concerns are features, speed and price. With the passage of time, however, consumers will become pickier about being green. Devices use less and less power, while renewable energy becomes more portable and effective. Research into new green materials is carried out every year, and many toxic materials are already being replaced. The greenest computer will not miraculously fall from the sky one day; it will be the product of years of improvements. The green computer of tomorrow will be characterized by its efficiency, manufacturing and materials, recyclability, service model, self-powering capability and other such trends, and it will be one of the major components of the green building, which will be the future of holistic green living.

Adopting green computing strategies is beneficial not only from an ecological standpoint but also from a commercial point of view. Many economic benefits are achievable through the implementation of green computing, such as cost savings, business buoyancy, disaster recovery and business continuity planning. Given the all-pervading nature of IT in today's economy, green computing can play a decisive role in the fight against global warming while enhancing the effectiveness and efficiency of business operations. It thus becomes the responsibility of every player in the IT field to work wholeheartedly towards a green IT environment and a more sustainable world.

REFERENCES
[1] Rebecca Brownstone, "Western Engineering Green Building", www.eng.uwo.ca/cmlp/Green_Build-ing_Draft_Report.pdf, July 2004.
[2] B. Krishnakumar Sharma, "World Green Building Day", http://epao.net/epSubPageExtractor.asp?src=education.Science_and_Technology.World_Green_Building_Day, September 23, 2011.
[3] The Green Grid (2010). Retrieved from http://www.uh.edu/infotech/news/story.php?story_id=130.
[4] Sarah Gingichashvili, "Green Computing", http://thefutureofthings.com/articles/1003/green-computing.html, November 19, 2007.
[5] Priya Rana, "Green Computing Saves Green", International Journal of Advanced Computer and Mathematical Sciences, Vol. 1, Issue 1, December 2010, pp. 45-51.
[6] "Green Computing". Retrieved from http://www.towardsgreen.blogspot.in/, April 22, 2010.
[7] Christopher Barnatt, "Green Computing", http://www.explainingcomputers.com/green.html, August 6, 2011.
[8] Ryan O'Sullivan, "Going green: The pros and cons of green computing", http://www.shoosmiths.co.uk/news/2289.asp, May 18, 2009.
[9] S. S. Verma, "Green computing", Science Tech Entrepreneur, http://www.technopreneur.net/infor-mation-desk/sciencetech-magazine/2007/nov07/Green%20Computing.pdf, November 2007.
[10] John Basso, "Cloud Computing is Green Computing", http://www.sdtimes.com/p/35070, December 13, 2010.

Continuously Varying Transmission
M.P. Singh
Assistant Professor in Mechanical Engineering, Jagan Nath University, Jaipur

ABSTRACT
A CVT operates by varying the working diameters of its two main pulleys. The pulleys have V-shaped grooves in which the connecting belt rides. One side of each pulley is fixed; the other side is moveable, actuated by a hydraulic cylinder. When actuated, the cylinder can increase or reduce the amount of space between the two sides of the pulley. This allows the belt to ride lower or higher along the walls of the pulley, depending on driving conditions, thereby changing the gear ratio. The action is similar to the way a mountain bike shifts gears by "derailing" the chain from one sprocket to the next, except that in the case of a CVT the action is infinitely variable, with no "steps" between. This stepless nature is the CVT's biggest draw for automotive engineers: a CVT can keep the engine in its optimum power range, thereby increasing efficiency and gas mileage, and it can convert every point on the engine's operating curve to a corresponding point on its own operating curve. With these advantages, it is easy to understand why manufacturers of high-mileage vehicles often incorporate CVT technology into their drive trains. Look for more CVTs in the coming years as the battle for improved gas mileage accelerates and technological advances further widen their functionality.

CVT THEORY & DESIGN
Today's automobiles almost exclusively use either a conventional manual transmission or an automatic transmission with multiple planetary gear sets that use integral clutches and bands to achieve discrete gear ratios. A typical automatic uses four or five such gears, while a manual normally employs five or six. The continuously variable transmission replaces discrete gear ratios with infinitely adjustable gearing through one of several basic CVT designs.

Push Belt. The most common type of CVT uses segmented steel blocks stacked on a steel ribbon, as shown in Figure 1. This belt transmits power between two conical pulleys, or sheaves, one fixed and one movable.

In essence, a sensor reads the engine output and then electronically increases or decreases the distance between the pulley faces, and thus the tension of the drive belt. The continuously changing distance between the pulley faces (and hence their ratio to one another) is analogous to shifting gears. Push-belt CVTs were first developed decades ago, but new advances in belt design have recently drawn the attention of automakers worldwide.

Toroidal Traction-Drive. These transmissions use the high shear strength of viscous fluids to transmit torque between an input torus and an output torus. As the movable torus slides linearly, the angle of a roller changes relative to shaft position, as seen in Figure 2. This results in a change in gear ratio.
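For the push-belt design described above, the effective gear ratio follows directly from the working radii at which the belt rides on the two pulleys (belt speed is the same on both, so engine rpm / output rpm = driven radius / driving radius). The sketch below, with purely illustrative radii, final-drive ratio and wheel size (assumptions, not values from the article), shows how a controller could pick the ratio that holds the engine near a target speed for a given road speed.

```python
import math

# Illustrative, assumed geometry (not from the article).
R_MIN, R_MAX = 0.030, 0.075      # metres: min/max working radius on each pulley
FINAL_DRIVE = 4.0                # fixed reduction between CVT output and wheels
WHEEL_RADIUS = 0.30              # metres
TARGET_ENGINE_RPM = 2500         # rpm near the engine's efficient operating point

def cvt_ratio(r_drive: float, r_driven: float) -> float:
    """Engine-to-output speed reduction: engine_rpm / output_rpm = r_driven / r_drive."""
    return r_driven / r_drive

def ratio_for_speed(vehicle_speed_kmh: float) -> float:
    """Ratio that would hold the engine at TARGET_ENGINE_RPM at this road speed,
    clamped to what the pulley geometry can actually provide."""
    v = vehicle_speed_kmh / 3.6                          # m/s
    wheel_rpm = v * 60 / (2 * math.pi * WHEEL_RADIUS)
    cvt_output_rpm = wheel_rpm * FINAL_DRIVE
    wanted = TARGET_ENGINE_RPM / cvt_output_rpm
    lo, hi = cvt_ratio(R_MAX, R_MIN), cvt_ratio(R_MIN, R_MAX)   # 0.4 .. 2.5 here
    return min(max(wanted, lo), hi)

for kmh in (20, 60, 120):
    print(f"{kmh:>3} km/h -> ratio {ratio_for_speed(kmh):.2f}")
```

Because the ratio is continuous between its geometric limits, the controller can hold the engine near one operating point over a wide range of road speeds, which is exactly the "no steps" advantage described in the abstract.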

Variable Diameter Elastomer Belt. This type of CVT, as represented in Figure 2, uses a flat, flexible belt mounted on movable supports. These supports can change radius and thus gear ratio. However, the supports separate at high gear ratios to form a discontinuous gear path, as seen in Figure 3. This can lead to the problems with creep and slip that have plagued CVTs for years.

This inherent flaw has directed research and development toward push-belt CVTs.

Other CVT Varieties. Several other types of CVT have been developed over the course of automotive history, but these have become less prominent than push-belt and toroidal CVTs. A nutating traction drive uses a pivoting, conical shaft to change gears: as the cones change angle, the inlet radius decreases while the outlet radius increases, or vice versa, resulting in an infinitely variable gear ratio. A variable-geometry CVT uses adjustable planetary gear sets to change gear ratios, but this is more akin to a flexible traditional transmission than a conventional CVT.

CHALLENGES & LIMITATIONS

CVT development has progressed slowly for a variety of reasons, but much of the delay can be attributed to a lack of demand: conventional manual and automatic transmissions have long offered sufficient performance and fuel economy, so problems encountered in CVT development tended to stall progress. Designers have unsuccessfully tried to develop a CVT that can match the torque capacity, efficiency, size, weight and manufacturing cost of step-ratio transmissions. One of the major complaints with previous CVTs has been slippage in the drive belt or rollers. This is caused by the lack of discrete gear teeth, which form a rigid mechanical connection between two gears; friction drives are inherently prone to slip, especially at high torque. With early CVTs of the 1950s and 1960s, engines would run at excessively high rpm trying to catch up with the slipping belt. This would occur any time the vehicle was accelerated from a stop at peak torque: for compressive belts, micro-slip occurs between the elements and the pulleys in the process of transmitting torque, and this micro-slip tends to increase sharply once the transmitted torque exceeds a certain value. For many years the simple solution to this problem has been to use CVTs only in cars with relatively low-torque engines. Another solution is to employ a torque converter (such as those used in conventional automatics), but this reduces the CVT's efficiency. Perhaps more than anything else, CVT development has been hindered by cost. Low volume and a lack of infrastructure have driven up manufacturing costs, which inevitably yield higher transmission prices. With increased development, most of these problems can be addressed by improvements in manufacturing techniques and materials processing. For example, Nissan's Extroid is derived from a century-old concept, perfected by modern technology, metallurgy, chemistry, electronics, engineering and precision manufacturing.

RESEARCH & DEVELOPMENT
While internal-combustion (IC) development has slowed in recent years as automobile manufacturers devote more resources to hybrid electric vehicles (HEVs) and fuel cell vehicles, CVT research and development is expanding quickly. Even U.S. automakers, who had lagged in CVT research until recently, are unveiling new designs, though the Japanese and Germans continue to lead the way in CVT development. Nissan has taken a dramatic step with its Extroid CVT, offered in the home-market Cedric and Gloria luxury sedans. This toroidal CVT costs more than a conventional belt-driven CVT, but Nissan expects the extra cost to be absorbed by the luxury cars' prices. The Extroid uses a high-viscosity fluid to transmit power between the disks and rollers, rather than metal-to-metal contact. Coupled with a torque converter, this yields exceptionally fast ratio changes. Most importantly, the Extroid is available with a turbocharged version of Nissan's 3.0-litre V6 producing 285 lb-ft of torque, a new record for CVT torque capacity.
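The belt-slip limitation discussed earlier in this section can be approximated, to first order, with the classical Euler (capstan) friction relation extended for a V-groove pulley, in which the wedging action raises the effective friction coefficient. The sketch below is only a hedged illustration with assumed values for friction, wrap angle, groove angle, belt tension and pulley radius; real metal push-belt CVTs involve clamping-force control and micro-slip behaviour well beyond this simple model.

```python
import math

def max_torque_before_slip(mu: float, wrap_angle_rad: float, groove_half_angle_rad: float,
                           tight_tension_N: float, pulley_radius_m: float) -> float:
    """Classical Euler (capstan) estimate of the torque a V-groove pulley can
    transmit before gross belt slip.

    The V-groove wedging action raises the effective friction coefficient to
    mu / sin(groove half-angle). With tight-side tension T1 fixed, the slack
    side can fall no lower than T2 = T1 / exp(mu_eff * wrap), and the torque
    is (T1 - T2) * r.
    """
    mu_eff = mu / math.sin(groove_half_angle_rad)
    t2 = tight_tension_N / math.exp(mu_eff * wrap_angle_rad)
    return (tight_tension_N - t2) * pulley_radius_m

# Illustrative, assumed numbers (not from the article): steel belt on a small pulley.
torque = max_torque_before_slip(mu=0.09,                       # lubricated steel on steel
                                wrap_angle_rad=math.pi,        # ~180 degrees of wrap
                                groove_half_angle_rad=math.radians(11),
                                tight_tension_N=6000,
                                pulley_radius_m=0.035)
print(f"Approximate slip-limited torque: {torque:.0f} N*m")
```

The qualitative lesson matches the text: once the demanded torque exceeds this friction-limited value, slip rises sharply, which is why high clamping forces or a torque converter are used for high-torque launches.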

Many small cars have used CVTs in recent years, and many more will use them in the near future. Nissan, Honda and Subaru currently use belt-drive CVTs developed with the Dutch company Van Doorne Transmissie (VDT) in some of their smaller cars. Suzuki and Daihatsu are jointly developing CVTs with the Japanese company Aichi Machine, using an aluminium/plastic composite belt reinforced with aramid fibres. Their CVT uses an auxiliary transmission for starts to avoid low-speed slip; after about 6 mph the CVT engages and operates as it normally would, and the auxiliary gear train's direct coupling ensures sufficiently brisk takeoff and initial acceleration. However, Aichi's CVT can only handle 52 lb-ft of torque, which alone effectively negates its potential for the U.S. market. Still, there are far more CVTs in production for 2000 than for 1999, and each major automobile show brings announcements of new CVTs.

New CVT Research. As recently as 1997, CVT research focused on the basic issues of drive-belt design and power transmission. Now, as belts by VDT and other companies become sufficiently efficient, research focuses primarily on the control and implementation of CVTs. Nissan Motor Co. has been a leader in CVT research since the 1970s. A recent study analyzing the slip characteristics of a metal-belt CVT resulted in a simulation method for the slip limits and torque capabilities of CVTs. This has led to a dramatic improvement in drive-belt technology, since CVTs can now be modeled and analyzed with computer simulations, resulting in faster development and more efficient design. Nissan's research on the torque limits of belt-drive CVTs has also led to the use of torque converters, which several companies have since implemented. The torque converter is designed to allow creep, the slow speed at which automatic-transmission cars move without driver-induced acceleration; it adds improved creep capability during idling for better driveability at very low speeds and easy launch on uphill grades. Nissan's Extroid uses such a torque converter for smooth starting, vibration suppression and good creep characteristics. CVT control has recently come to the forefront of research; even a mechanically perfect CVT is worthless without an intelligent active control algorithm. Optimal CVT performance demands integrated control, such as the system developed by Nissan to obtain the demanded drive torque with optimum fuel economy. The control system determines the
necessary CVT ratio based on a target torque, vehicle speed and desired fuel economy. Honda has also developed an integrated control algorithm for its CVTs, considering not only the engine's thermal efficiency but also work lost to drive-train accessories and the transmission itself. Testing of Honda's algorithm with a prototype vehicle resulted in a one per cent fuel-economy increase compared with a conventional algorithm. While not a dramatic increase, Honda claims that its algorithm is fundamentally sound and will become one of the basic technologies for the next generation's power-plant control. Although CVTs are currently in production, many control issues still amount to a tremendous number of trials and errors. One study focusing on the numerical representation of power transmission showed that both block tilting and pulley deformation meaningfully affected the pulley thrust ratio between the driving and the driven pulleys; the resulting model of CVT performance can be used in future applications for transmission optimization. As more studies are conducted, fundamental research such as this will become the legacy of CVT design, and research can become more specialized as CVTs become more refined.

As CVTs move from research and development to the assembly line, manufacturing research becomes more important. CVTs require several crucial, high-tolerance components in order to function efficiently; Honda studied one of these, the pulley piston, in 1998. Honda found that prototype pistons experienced a drastic thickness reduction (32% at maximum) due to the conventional stretch-forming method. A four-step forming process was developed to ensure a greater and more uniform thickness, and thus greater efficiency and performance; moreover, work-hardening during the forming process further increased the pulley piston's strength.

Future Prospects for CVTs. Much of the existing literature is quick to admit that the automotive industry lacks a broad knowledge base regarding CVTs. Whereas conventional transmissions have been continuously refined and improved since the very start of the 20th century, CVT development is only just beginning. As infrastructure is built up along with that knowledge base, CVTs will become ever more prominent in the automotive landscape. Even today's CVTs, which represent first-generation designs at best, outperform conventional transmissions. Automakers who fail to develop CVTs now, while the field is still in its infancy, risk being left behind as CVT development and implementation continues its exponential growth.

CVTs & Hybrid Electric Vehicles. While CVTs will help to prolong the viability of internal combustion engines, CVTs themselves will certainly not fade if and when IC does. Several companies are currently studying the implementation of CVTs in HEVs. Nissan recently developed an HEV with fuel efficiency more than double that of existing vehicles in the same class of driving performance. The electric motor avoids the low-speed/high-torque problems often associated with CVTs through an innovative double-motor system. At low speeds, a low-power traction motor is used as a substitute mechanism to accomplish the functions of launch and forward/reverse shift. This has made it possible to discontinue use of a torque converter as the launch element and of a planetary gear set and wet multi-plate clutches as the shift mechanism.

Thus the use of a CVT in an HEV is optimal: the electric portion of the power system avoids the low-speed problems of CVTs, while retaining the fuel-efficiency and power-transmission benefits at high speeds. Moreover, the use of a CVT capable of handling high engine torque allows the system to be applied to more powerful vehicles. Obviously, automakers cannot develop individual transmissions for each car they sell; rather, a few robust, versatile CVTs must be able to handle a wide range of vehicles.

CONCLUSION
Today, only a handful of cars worldwide make use of CVTs, but the applications and benefits of continuously variable transmissions can only increase on the basis of today's research and development. As automakers continue to develop CVTs, more and more vehicle lines will begin to use them. As development continues, fuel-efficiency and performance benefits will inevitably increase, leading to increased sales of CVT-equipped vehicles. Increased sales will prompt further development and implementation, and the cycle will repeat ad infinitum. Moreover, increasing development will foster competition among manufacturers (automakers from Japan, Europe and the U.S. are already either using or developing CVTs), which will in turn lower manufacturing costs. Any technology with inherent benefits will eventually reach fruition; the CVT has only just begun to blossom.

3-D without four eyes (3-D displays are trying to shed their spectacles)
Sudhanshu Mathur
Assistant Professor in Electronics Engineering, Jagan Nath University, Jaipur

New glasses-free 3-D devices are about to hit the market, and their backers are hoping they'll make 3-D spectacles as obsolete as smell-o-vision. These gadgets are called autostereo devices, and they will include not only 3-D game consoles but also cameras, cell phones and tablet computers. Among the first are autostereo 3-D TVs, just now hitting stores in Japan, and Nintendo's 3DS handheld games console, released worldwide in early 2011. The technology matters because, according to one American survey, a quarter of gamers got headaches from 3-D, while others complained of eyestrain or felt disoriented or dizzy after playing. In a similar survey of 2,000 Americans by the market-research firm NPD Group, over half said that having to wear glasses would discourage them from upgrading to 3-D altogether. Moreover, the glasses aren't cheap: high-tech 3-D specs cost US$100 or more.

Now let us understand the concept of autostereo. To perceive three dimensions, a person's eyes must each see a different, slightly offset image. In the real world, the spacing between the eyes makes that happen naturally. On a video screen it's not so simple: one display somehow has to present a separate view to each eye. Some systems handle this challenge by interspersing the left and right views, called multiplexing; others alternate the left and right views in time, called sequencing. Whatever the approach, the displays then use optical or technological tricks to direct the correct view to
the correct eye.

For example, the glasses used with currently available 3-D TVs are active shutter glasses. They contain a set of miniature LCD panels that synchronize with the large LCD screen in the TV: when the main screen is showing an image destined for your right eye, a liquid-crystal shutter makes the left lens of the glasses opaque, and vice versa. This sequential system switches between images meant for each eye dozens of times a second, creating a smooth 3-D effect.

Nintendo was the first to swear off glasses, announcing the 3DS, an autostereo handheld gaming device. It has two built-in screens: one touch-sensitive but limited to 2-D, the other a 3.5-inch display with the 3-D effect. The 3DS's autostereo screen, made by Sharp, uses multiplexed parallax-barrier technology. This method lays a second layer of liquid crystals next to a traditional LCD and its backlight. The extra layer creates thin vertical strips that block some of the light and direct the remaining light alternately to the left and right eyes, creating a 3-D effect for a single viewer at a set distance, usually around 30 cm. Researchers have also experimented with autostereo displays that generate multiple sets of 3-D images, either to accommodate several viewers simultaneously or to reduce the flip-flopping effect seen when your head moves relative to the screen.

Today, the Nintendo 3DS and its high-profile games are tackling the hardware-and-content chicken-and-egg problem at the same time, and the industry could finally be gearing up for a handheld 3-D revolution. Glasses-free (and headache-free) 3-D could be the new must-have upgrade for cell phones, like GPS, digital photography and music playing before it. Industry research predicts that by 2018 mobile devices will have leapfrogged televisions to become the most popular 3-D gadgets.
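The parallax-barrier geometry described above can be captured with a simple similar-triangles model. The sketch below is an idealized, hedged calculation: it ignores refraction in the cover glass and assumes a single centred viewer, and the pixel pitch and eye separation are illustrative assumptions, with only the roughly 30 cm viewing distance taken from the article.

```python
# Idealized parallax-barrier geometry (similar triangles, no refraction).
# For a viewer at distance D from the barrier, eye separation e, and pixel
# pitch p (one column per eye, so a left/right pair spans 2p):
#   barrier-to-pixel gap:  g = p * D / e
#   barrier slit pitch:    b = 2 * p * D / (D + g)

def barrier_design(pixel_pitch_mm: float, eye_sep_mm: float, view_dist_mm: float):
    g = pixel_pitch_mm * view_dist_mm / eye_sep_mm
    b = 2 * pixel_pitch_mm * view_dist_mm / (view_dist_mm + g)
    return g, b

# Illustrative numbers: ~0.075 mm sub-column pitch, 65 mm eye separation,
# 300 mm (30 cm) viewing distance as mentioned for the handheld console.
gap, slit_pitch = barrier_design(pixel_pitch_mm=0.075, eye_sep_mm=65.0,
                                 view_dist_mm=300.0)
print(f"Barrier gap ~{gap:.3f} mm, slit pitch ~{slit_pitch:.4f} mm "
      f"(just under twice the pixel pitch)")
```

The result shows why the effect only works near the design distance: move closer or farther and the slits no longer steer the interleaved columns to the correct eyes, producing the flip-flopping the article mentions.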

Go Green with Green Building and by Green Construction Materials
Bharat Nagar
Assistant Professor, Civil Engineering Department, Jagan Nath University, Jaipur

WHAT IS A GREEN BUILDING?
Green building refers to a structure, and to construction and operating processes, that are environmentally responsible and resource-efficient throughout the building's life cycle: from siting to design, construction, operation, maintenance, renovation and demolition. This practice expands and complements the classical building-design concerns of economy, utility, durability and comfort. A green building, also known as a sustainable building, is a structure that is designed, built, renovated, operated or reused in an ecological and resource-efficient manner. Green buildings are designed to meet objectives such as protecting occupant health; improving employee productivity; using energy, water and other resources more efficiently; and reducing the overall impact on the environment.

The concepts of green architecture can generally be organized into several areas of application: sustainability, materials, energy efficiency, land use and waste reduction. Green buildings are not only designed for their present use; consideration is also given to future uses. An adaptable structure can be "recycled" many times over the course of its useful life, and if specific technical issues prevent use of the building for a new function, the materials used in its construction are chosen to facilitate recycling and reprocessing. Green technology is an approach to building that has become more prevalent in the last 25 to 30 years. Also known as sustainable design, green architecture is simply a method of design that minimizes the impact of building on the environment. Once thought of as unconventional and nonstandard, green architecture is quickly becoming accepted by regulatory agencies and the public alike as a socially responsible and logical means of construction.

There are three main areas in green building: materials, reduced energy use, and reduced waste.

WHAT IS THE NEED FOR GREEN BUILDINGS?
Here are ten reasons (in no particular order) why green buildings are needed in different parts of the world:

1. Green buildings can command rents as much as 10% above the norm. Niche markets are already turning mainstream, demanding low-impact buildings. In Australia, only a few short years after the introduction of the Green Star rating system for buildings, virtually every new office built achieves a Green Star rating. The definition of 'Class A' office space has been redefined, entirely on the strength of a voluntary transformation of the building industry in response to tenant demand.

2. Green buildings improve productivity. Studies in the US have shown this to be true in a number of different ways. Not only do office workers enjoy their working environment more - taking fewer sick days and reporting fewer minor ailments - but green factories show fewer injuries, green retailers sell more products, and green hospitals discharge patients sooner.

3. Green buildings show respect for the people who use them. Probably nowhere is this more important than in schools. If the education system provides children with healthier, more pleasant schools, pupils will understand that they are valued and will be more open to treating their environment with respect. Whether it's because of this, or just because they feel better in a healthy environment, studies have shown that children can achieve better results at green schools.

4. Green buildings raise the quality and standard of buildings generally. In many countries, the typical office building is not built in compliance with standard building codes. A
green building rating forces a developer to show that the new building not only meets, but exceeds municipal codes. And as green buildings become more common, this places pressure on others to compete at this higher level. 5. Green buildings inspire innovation. Rating systems generally don't prescribe what technology should be incorporated in a building, they set performance standards with regard to reduced environmental impact, and leave it up to the developer to decide how to meet these standards. The best first step is to design buildings so that they meet requirements - especially for air quality and energy consumption - through passive systems that don't require mechanical equipment. Only then should equipment be used to achieve what the passive design cannot, by using efficient systems. And significant efficiency demands consideration of the building as a whole, and the impact of the various design decisions on each other. A building is very unlikely to achieve the highest green rating if design is not approached holistically; and when the professional team starts to think this way, innovations often emerge. Interestingly, while big buildings can benefit from economies of scale with green systems, small buildings are sometimes the more innovative, as they are sometimes able to do things like recycling all of their construction waste. Innovation takes the industry forward, raising the bar for the next wave of developers. 6. Green buildings encourage learning about what works and what doesn't. The evolution of the building industry has been slow in the past, but the green revolution is accelerating change in design approach, building methods, the choice of materials, and the manufacture of building materials. Mistakes will be made, but presenters at the conference hammered home the point: the industry must publish performance results so that we move in the right direction. The same is true of the industries that supply the building industry. Just as a bottle of milk might be certified 'organic', so too the materials that go into a building need to be rated for their performance on measures such as water and energy consumption, and carbon emissions. Rating systems by their nature steadily push the industry to its limits (Green Star deliberately targets the top 25% of new buildings). And an important part of a green building is the incorporation of systems that monitor performance, so that the information is there as a tool for building managers to ensure optimum performance. This same information can show us what strategies will achieve positive results. It's a brave owner of a green building who admits having made mistakes, but such honesty is made easier by the knowledge that earlier buildings inevitably will not perform as well as later buildings. 7. Green buildings can help electricity utilities by reducing peak demand. Energy-efficient buildings don't only reduce emissions overall (in both their operation and initial construction), they also help smooth the peaks in demand. And in a growing number of cases in Australia and the US, they are net exporters of electricity using co-generation and tri-generation. We will know that the building industry is really making a significant impact when the need for a new coal-fired power station is removed.

8. Green buildings raise awareness of what constitutes a high-quality environment. By setting out very specific performance targets, green building rating systems make it clear how the indoor environment can be improved over standard-issue buildings. People who choose to buy or rent buildings often can't articulate what it is that they value in a building, or what qualities they look for. Indeed, in the early stages of the transformation of the industry, this is a challenge for creating acceptance that green buildings are worth paying extra for. But the rating systems provide that articulation, and it's just a matter of raising awareness.

9. Green buildings can trade energy. The idea has been suggested - even here in South Africa - that as buildings begin to develop new ways of saving and generating energy, there may be scope for an energy market, similar to a carbon market but on a more local scale. Some buildings will never be able to be self-reliant in energy terms, while others may generate a surplus. Trading is the logical response in an environment where energy savings are an imperative, as they are in South Africa right now. If a particular building owner is unable to meet an externally set target of a 10% reduction in electricity consumption, he or she could trade electricity 'credits'.

10. Green buildings present exciting new challenges for environmental stewardship. Is it enough to be 'efficient', or even 'sustainable'? If we really think about it, that sounds like a low target to be setting ourselves. People who are working towards the next generation of rating tools are thinking about how to take buildings to a new level. Terms like 'restorative' and 'living buildings' are starting to emerge, suggesting that buildings could do better than just zero environmental damage. They could begin to compensate for damage caused in other sectors, by being 'carbon negative' or making a positive contribution to the environment, rather than being merely benign.

GREEN BUILDING MATERIALS
Green building materials are composed of renewable rather than non-renewable resources. Green materials are environmentally responsible because their impacts are considered over the life of the product. Green building materials offer specific benefits to the building owner and building occupants:

- Reduced maintenance/replacement costs over the life of the building.
- Energy conservation.
- Improved occupant health and productivity.
- Lower costs associated with changing space configurations.
- Greater design flexibility.

LIST OF GREEN BUILDING MATERIALS:
1. Bamboo, bamboo-based particle board and ply board, bamboo matting
2. Sun-dried bricks
3. Precast cement concrete blocks, lintels and slabs; structural and non-structural modular elements
4. Calcined phospho-gypsum wall panels
5. Calcium silicate boards and tiles
6. Cellular lightweight concrete blocks
7. Cement paint
8. Clay roofing tiles
9. Water-, polyurethane- and acrylic-based chemical admixtures for corrosion removal, rust prevention and water proofing
10. Epoxy resin systems, flooring, sealants, adhesives and admixtures
11. Ferro-cement boards for door and window shutters
12. Ferro-cement roofing channels
13. Fly-ash sand lime bricks and paver blocks
14. Gypsum board, tiles, plaster and blocks; gypsum plaster with jute/sisal and glass-fibre composites
15. Laminated wood plastic components
16. Marble mosaic tiles
17. MDF boards and mouldings
18. Micro concrete roofing tiles
19. Particle board
20. Polymerised waterproof compound

Blending Technology: The Real Pinnacle for the 21st Century Learning Environment
Suraj Yadav
Department of Computer Science / Information Technology, Jagan Nath University, Jaipur

Abstract: Educational technology based on computer networks is becoming popular worldwide because of new inventions in networking. E-learning has altered, and will continue to affect, teaching and learning contexts in tertiary education, and it is one of the fastest growing areas of the high-technology sector. Blended learning is a new idea and method for teaching and learning reform, and it is replacing e-learning as the next big thing. Blended learning solves the problems of speed, scale and impact, and leverages e-learning where it is most appropriate, without forcing e-learning into places it does not fit.

I. INTRODUCTION TO E-LEARNING
Nowadays the Web 2.0-typified Internet increasingly affects people's work, study and lives, especially for the younger generation of so-called Digital Natives, who use computing terminals such as computers and smartphones almost every day and conduct interpersonal interaction in a virtual world through e-mail and instant messaging. Web 2.0 is mainly a form of Internet application in which users create the content, with emphasis on gathering collective wisdom and on user experience; it is built on technologies such as RSS (Atom/JSON), tagging and Ajax, and on applications such as blogs, wikis, social networks and social bookmarking. This digital experience has become an important part of young people's lives. Instructors need to change their teaching methods, resource publishing and learning-support services, and to utilize the ubiquitous resources of digital life to enhance learners' efficiency. E-learning is one of the fastest growing areas of the high-technology sector. It involves the use of ICT such as e-mail, the Internet, audio/video, CD-ROMs, DVDs, videoconferencing, mobile devices, television and satellite broadcasting. The use of ICT can remove time and place constraints on teaching and learning, providing the flexibility that many tertiary students now demand.

II. CHALLENGES OF E-LEARNING

Although e-learning is one of the fastest growing areas, it also has disadvantages and challenges that affect its growth:
- lack of customization to students' interests (and courses of fixed length instead of modules);
- lack of student motivation;
- lack of personal community and connection (when learning is not blended);
- a "banking" model of education (which is partially inevitable);
- not experientially based: simulation-based at best;
- not necessarily based on the best science of learning;
- lack of quality assessment and feedback, which hinders learning;
- mostly disconnected from the needs of employers, which means it is disconnected from the desires of students and parents (this may be the largest criticism);
- some self-directed learning is too random and has no process (it is too loosely joined: sometimes you need a bridge or a path), and some is subject to quality issues, since the learner has to analyse content without the requisite knowledge or criteria ("authority 2.0");
Jagan Nath University STEM Bulletin Vol.1 No.2 April 2012

23

Lack of certification (or assessment) for self-directed learning, Tech, toys, and teaching over learning, Focus on memorization over learning core competencies, Time resources at a minimum (Tradeoff w/ NCLB on the high school level. And NCLB cuts into the arts in time, funding, and resources.) But some teachers dont know how much time they have, Lack of mentorship for self-learners and even some just the facts maam distance learning, Lack of adoption to learning style of learners. (e-learning just textbooks in drag), Better aligning of incentives of teachers and learners (?), Downtime + mobile as well as play are issues to consider as well, Lack of digital literacy and keeping up with the pace of change and many more are also present as per desire to one.

III. INTRODUCTION OF BLENDED TECHNOLOGY

Blended learning is really the natural evolution of e-learning into an integrated program of multiple media types, applied in an optimal way to solve a business problem. Blended learning can be described as a learning program in which more than one delivery mode is used with the objective of optimizing the learning outcome and the cost of program delivery. However, it is not the mixing and matching of different delivery modes by itself that is significant, but the focus on the learning and business outcome. Blended learning focuses on optimizing the achievement of learning objectives by applying the right learning technologies to match the right personal learning style, to transfer the right skills to the right person at the right time. Embedded in this definition are the following principles:
- We focus on the learning objective rather than the method of delivery.
- Many different personal learning styles need to be supported to reach broad audiences.
- Each of us brings different knowledge into the learning experience.
- In many cases, the most effective learning strategy is just-what-I-need, just-in-time.
The experience of pioneers in blended learning shows that putting these principles into practice can result in radical improvements in the effectiveness, reach and cost-effectiveness of learning programs relative to traditional approaches. These improvements are so profound that they have the potential to change the overall competitiveness of entire organizations.

IV. BLESS MODEL

The Blended Learning Systems Structure (BLESS) model addresses both dimensions of blended learning, didactics and learning technology, by considering their reciprocal influences: on the one hand, learning technology provides new, enhanced means of learning support, while on the other hand didactics have to be reconsidered accordingly to make situated and targeted use of learning technology.


Figure 1. The BLESS model

As depicted in Figure 1, the gap between these two worlds is closed by a conceptual system of layers and their respective transitions. In brief, concrete blended learning courses (layer 1) are visualized and modeled conceptually as UML activity diagrams (layer 2). These diagrams are decomposed into (or expressed in terms of) self-contained, reusable didactical scenarios, the blended learning patterns (layer 3). Subsequently, the Web template layer (layer 4) shows how to support these patterns on learning technology systems. Here begins the learning-platform-dependent part of the BLESS model, as the transition to the technology layer has to define how the Web templates are instantiated and implemented on top of a concrete learning platform (layer 5).
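A minimal sketch of this layering, assuming nothing beyond what is described above: the layer names follow the BLESS model, while the example artifacts and the Python representation itself are purely illustrative and not part of the model.

# Illustrative only: the five BLESS layers and their top-down refinement.
# Layer names follow the text; the example artifacts are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BlessLayer:
    number: int
    name: str
    examples: List[str] = field(default_factory=list)

bless_stack = [
    BlessLayer(1, "Blended learning courses", ["a concrete course offering"]),
    BlessLayer(2, "UML activity diagrams", ["the course modelled as an activity diagram"]),
    BlessLayer(3, "Blended learning patterns", ["reusable didactical scenarios"]),
    BlessLayer(4, "Web templates", ["platform-independent support for each pattern"]),
    BlessLayer(5, "Learning platform", ["instantiation on a concrete LMS"]),
]

# Walking the stack top-down mirrors the decomposition described above:
# course -> diagram -> patterns -> templates -> platform-specific implementation.
for upper, lower in zip(bless_stack, bless_stack[1:]):
    print(f"Layer {upper.number} ({upper.name}) is refined into "
          f"layer {lower.number} ({lower.name})")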

V. DIMENSIONS OF BLEND

The original use of the phrase "blended learning" was often associated with simply linking traditional classroom training to e-learning activities. However, the term has evolved to encompass a much richer set of learning strategy dimensions. Today a blended learning program may combine one or more of the following dimensions, although many of these have overlapping attributes.

1) Blending Offline and Online Learning
At the simplest level, a blended learning experience combines offline and online forms of learning, where online learning usually means learning over the Internet or an intranet, and offline learning happens in a more traditional classroom setting. We assume that even the offline learning offerings are managed through an online learning system. An example of this type of blending is a learning program that provides study materials and research resources over the Web while providing instructor-led classroom training sessions as the main medium of instruction.


2) Blending Self-Paced and Live, Collaborative Learning
Self-paced learning implies solitary, on-demand learning at a pace that is managed or controlled by the learner. Collaborative learning, on the other hand, implies a more dynamic communication among many learners that brings about knowledge sharing. The blending of self-paced and collaborative learning may include a review of important literature on a regulatory change or new product, followed by a moderated, live online, peer-to-peer discussion of the material's application to the learner's job and customers.

3) Blending Structured and Unstructured Learning
Not all forms of learning imply a premeditated, structured or formal learning program with content organized in a specific sequence like the chapters of a textbook. In fact, most learning in the workplace occurs in an unstructured form, such as meetings, hallway conversations and e-mail. A blended program design may look to capture active conversations and documents from unstructured learning events into knowledge repositories available on demand, supporting the way knowledge workers collaborate and work.

4) Blending Custom Content with Off-the-Shelf Content
Off-the-shelf content is by definition generic and unaware of your organization's unique context and requirements. However, generic content is much less expensive to buy and frequently has higher production values than custom content you build yourself. Generic, self-paced content can be customized today with a blend of live experiences (classroom or online) or through content customization. Industry standards such as SCORM (Sharable Content Object Reference Model) open the door to greater flexibility in blending off-the-shelf and custom content, improving the user experience while minimizing cost.

5) Blending Work and Learning
Ultimately, the true success and effectiveness of learning in organizations is believed to be associated with the paradigm in which work (such as business applications) and learning are inseparable, and learning is embedded in business processes such as hiring, sales, or product development. Work becomes a source of learning content to be shared, and more learning content becomes accessible on demand and in the context of the user's workplace need.

VI.

HOW TO SELECT BLEND

To make blended learning more powerful, you can start by looking at all the media as options: classroom training, web-based training, webinars, CD-ROM courses, video, EPSS systems, and simulations. Other media that are less exciting but just as important include books, job aids, conference calls, documents, and PowerPoint slides. The highest impact programs blend a more complex medium with one or more of the simpler media. A web-based course for introduction followed by a real hands-on interactive class is an obvious mix. The media selection guide below summarizes these options.


Media Selection Guide

Media Type | Instructional Value | Scalability | Time to Develop | Cost to Develop | Cost to Deploy | Assessment Capable | Trackable
Classroom based training | High | Low | 3-6 weeks | Medium | High | Medium | Low
WBT Courseware | High | High | 4-20 weeks | High | Low | High | High
CD ROM Courseware | High | High | 6-20 weeks | High | Medium | High | Low
Conference Calls | Low | Medium | 0-2 weeks | Low | Low | No | No
Webinars | Medium | Medium | 3-6 weeks | Low | Medium | Low | Low
Software / Online Simulations | Very High | Medium | 8-20 weeks | High | Medium | High | High
Lab-based Simulations | Very High | Low | 3-6 weeks | High | High | Medium | Medium
Job Aids | Low | High | 0-3 weeks | Low | Low | None | None
Web Pages | Low | High | 1-8 weeks | Low | Low | None | None
Web Sites | Low | High | 1-8 weeks | Low | Low | None | None
Mentors | Medium | Low | 2-3 weeks | High | High | Low | Low
Chat / Discussion / Community Services | Medium | Medium | 4-6 weeks | Medium | Medium | None | Low
Video (VCR or Online) | High | Medium | 6-20 weeks | High | High | None | Low
EPSS | Medium | Medium | 8-20 weeks | Medium | Medium | None | Medium

Source: Bersin & Associates, 2003, "Blended Learning: What Works" checklist, www.bersin.com
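As a rough illustration of how such a guide might be used (this is not part of the Bersin checklist), the sketch below encodes a few rows of the table and applies the rule of thumb from this section: anchor the blend on one rich medium and wrap it with cheaper, simpler media. The row values paraphrase the table above; the ranking scheme and function names are assumptions made only for illustration.

# Illustrative sketch: pick one high-impact anchor medium plus cheap supporting media.
MEDIA_GUIDE = {
    "Classroom based training":      {"instructional": "High",      "cost_to_develop": "Medium"},
    "WBT Courseware":                {"instructional": "High",      "cost_to_develop": "High"},
    "Software / Online Simulations": {"instructional": "Very High", "cost_to_develop": "High"},
    "Webinars":                      {"instructional": "Medium",    "cost_to_develop": "Low"},
    "Conference Calls":              {"instructional": "Low",       "cost_to_develop": "Low"},
    "Job Aids":                      {"instructional": "Low",       "cost_to_develop": "Low"},
}

RANK = {"Low": 0, "Medium": 1, "High": 2, "Very High": 3}

def propose_blend(guide):
    """Return (anchor, supporting media) following the blend-complex-with-simple rule."""
    anchor = max(guide, key=lambda m: RANK[guide[m]["instructional"]])
    support = [m for m, row in guide.items()
               if m != anchor and row["cost_to_develop"] == "Low"]
    return anchor, support

anchor, support = propose_blend(MEDIA_GUIDE)
print("Anchor medium:", anchor)
print("Supporting media:", ", ".join(support))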

VII. BENEFITS OF BLENDING


The concept of blended learning is rooted in the idea that learning is not just a one-time event but a continuous process. Blending provides various benefits over using any single learning delivery type alone:

1) Improved Learning Effectiveness
Recent studies at the University of Tennessee and Stanford give us evidence that a blended learning strategy actually improves learning outcomes by providing a better match between how a learner wants to learn and the learning program that is offered.

2) Extending the Reach
A single delivery mode inevitably limits the reach of a learning program or critical knowledge transfer in some form or fashion. For example, a physical classroom training program limits access to only those who can participate at a fixed time and location, whereas a virtual classroom event is inclusive of a remote audience and, when followed up with recorded knowledge objects (the ability to play back a recorded live event), can extend the reach to those who could not attend at a specific time.


3) Optimizing Development Cost and Time
Combining different delivery modes has the potential to balance out and optimize the learning program development and deployment cost and time. A one-hundred-percent online, self-paced, media-rich, web-based training program may be too expensive to produce (requiring multiple resources and skills), but combining virtual collaborative learning forums and coaching sessions with simpler self-paced materials, such as generic off-the-shelf WBT, documents, case studies, recorded live e-learning events, text assignments and PowerPoint presentations (requiring quicker turn-around time and lower skill to produce), may be just as effective or even more effective.

4) Optimizing Business Results
Organizations report exceptional results from their initial blended learning initiatives. Learning objectives can be achieved in 50% less class time than with traditional strategies, and travel costs and time have been reduced by up to 85%. Acceleration of mission-critical knowledge to channels and customers can have a profound impact on the organization's top line.

VIII. CONCLUSIONS

Organizations are rapidly discovering that blended learning is not only more time- and cost-effective, but also provides a more natural way to learn and work. Organizations at the forefront of this next generation of learning will have more productive staff, be more agile in implementing change, and be more successful in achieving their goals. Organizations must look beyond the traditional boundaries of classroom instruction by augmenting their current best practices with new advances in learning and collaboration technologies to maximize results. More importantly, organizations must seek to empower every individual in the organization to become an active participant in the learning and collaboration process.


Nano solar cells as an efficient source of renewable solar energy in green buildings Pramod Kumar Dept. of Physics, Faculty of Engineering, Jagan Nath University, Jaipur

Abstract: Green building requires the efficient use of energy, water and other resources. Solar cells have been providing energy to buildings for a long time. The Sun is a massive reservoir of clean energy, and this energy can be harnessed by solar cells; it is also known as renewable solar energy. Recently, a very promising solar cell technology known as nano solar technology has emerged. Today, solar cell technology is in limited use due to the relatively high manufacturing cost of silicon-based technology and the low power efficiency of organic-polymer-based solar cell technology. However, research indicates that solar cells based on nanomaterials, known as nano solar cells, can be more efficient than conventional solar cells. This paper explores nano solar cells and conventional solar cells, and concludes that nano solar cells can be very effective for meeting the energy requirements of green buildings.

Introduction: Humanity's top ten problems for the next 100 years will be energy, water, food, environment, poverty, terrorism and war, disease, education, democracy and population. An increasing population will put extra pressure on many social and economic issues. The demand for safe, clean energy and better living conditions is continuously increasing. The demand for energy is projected to rise to about 25 TW by 2050 (source: EIA International Energy Outlook 2004). This demand cannot be fulfilled by the present sources of energy. In this critical situation, new kinds of research become very important for energy, food and safe living.

The concept of the green building has evolved to address the problems of safe living and energy for humanity. A green building, as shown in figure 1, also known as a sustainable building, is a structure that is designed, built, renovated, operated, or reused in an ecological and resource-efficient manner. Green buildings are designed to meet certain objectives such as protecting occupant health; improving employee productivity; using energy, water, and other resources more efficiently; and reducing the overall impact on the environment [1].

The common objective is that green buildings are designed to reduce the overall impact of the built environment on human health and the natural environment by efficiently using energy, water, and other resources; protecting occupant health and improving employee productivity; and reducing waste, pollution and environmental degradation [2].

Figure 1. Green building

For the energy requirements of green buildings, solar energy is one of the best non-conventional energy sources. Solar energy is the most readily available source of energy; it does not belong to anybody and is, therefore, free. It is also the most important of the non-conventional sources of energy because it is non-polluting and, therefore, helps in lessening the greenhouse effect. In the next few years it is expected that millions of households in the world will be using solar energy, as the trends in the USA and Japan show. In India too, the Indian Renewable Energy Development Agency and the Ministry of Non-Conventional Energy Sources are formulating a programme to bring solar energy to more than a million households in the next few years. India receives solar energy equivalent to over 5000 trillion kWh/year, which is far more than the total energy consumption of the country [3]. India is one of the few countries with long days and plenty of sunshine, especially in the desert region.

This zone, having abundant solar energy available, is suitable for harnessing solar energy for a number of applications. In areas with a similar intensity of solar radiation, solar energy could be easily harnessed. Solar thermal energy is being used in India for heating water for both industrial and domestic purposes [3]. Solar energy can be converted into electricity by photovoltaic cells or by the Rankine cycle of a thermal power plant; in homes, photovoltaic cells are used for the conversion. Current solar power technology has little chance of competing with fossil fuels or large electric grids. Today's solar cells are simply not efficient enough and are currently too expensive to manufacture for large-scale electricity generation [4]. However, potential advances in nanotechnology may open the door to the production of cheaper and somewhat more efficient solar cells. First, I examine the current solar cell technologies available and their drawbacks; then I explore the research field of nano solar cells and the science behind them.

Conventional Solar Cells: A solar cell, also known as a photovoltaic cell (PV cell), is a device that converts light energy (solar energy) directly into electricity. The term "solar cell" is used when the device captures energy from sunlight, whereas "PV cell" refers to operation with an unspecified light source. A solar cell is like a battery in that it supplies DC power, but unlike a battery, the voltage supplied by the cell changes with changes in the resistance of the load [5]. These cells are made of semiconducting material, usually silicon. A conventional solar cell is shown in figure 2.

Figure 2. Conventional solar cell
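The remark above that a PV cell, unlike a battery, changes its voltage with the load can be made concrete with the standard textbook single-diode model, I = I_ph - I_0*(exp(V/(n*Vt)) - 1). This model is a general illustration rather than anything taken from reference [5], and all parameter values below are assumptions roughly typical of a small silicon cell.

# Hedged sketch: single-diode model of a PV cell, illustrating why the operating
# voltage and current shift with the load. Parameter values are assumptions.
import math

I_ph = 3.0     # photo-generated current, A (depends on illumination)
I_0 = 1e-9     # diode saturation current, A
n = 1.3        # diode ideality factor
Vt = 0.02585   # thermal voltage kT/q at about 300 K, volts

def cell_current(v):
    """Terminal current (A) of the cell at terminal voltage v (V)."""
    return I_ph - I_0 * (math.exp(v / (n * Vt)) - 1.0)

for v in (0.0, 0.3, 0.5, 0.6, 0.7):
    i = cell_current(v)
    print(f"V = {v:.2f} V -> I = {i:5.2f} A, power delivered = {v * i:4.2f} W")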


When light hits the cells, they absorb energy through photons. This absorbed energy knocks electrons loose in the silicon, allowing them to flow. By adding different impurities to the silicon, such as phosphorus or boron, an electric field can be established. This electric field acts as a diode because it only allows electrons to flow in one direction; consequently, the end result is a current of electrons, better known to us as electricity [6]. Conventional solar cells have two main drawbacks: they can only achieve efficiencies of around ten percent, and they are expensive to manufacture. The first drawback, inefficiency, is almost unavoidable with silicon cells. This is because the incoming photons, or light, must have the right energy, called the band gap energy, to knock out an electron. If the photon has less energy than the band gap energy, it will pass through. If it has more energy than the band gap, the extra energy will be wasted as heat. Scott Aldous, an engineer for the North Carolina Solar Center, explains that "these two effects alone account for the loss of around 70 percent of the radiation energy incident on the cell" [6]. Consequently, according to the Lawrence Berkeley National Laboratory, the maximum efficiency achieved today is only around 25 percent [7]. Mass-produced solar cells are much less efficient than this, and usually achieve only ten percent efficiency.

Nano Solar Cells: Nanotechnology might be able to increase the efficiency of solar cells, but its most promising application is the reduction of manufacturing cost. Chemists at the University of California, Berkeley, have discovered a way to make cheap plastic solar cells that could be painted onto almost any surface. These new plastic solar cells achieve efficiencies of only 1.7 percent; however, Paul Alivisatos, a professor of chemistry at UC Berkeley, states, "This technology has the potential to do a lot better. There is a pretty clear path for us to take to make this perform much better" [8]. These new plastic solar cells utilize tiny nanorods dispersed within a polymer. A diagram of a nano solar cell is shown in figure 3.


Figure 3. Nano Solar Cells
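A quick back-of-the-envelope check of the band-gap limitation described above (this calculation is an added illustration, not taken from the cited sources): for crystalline silicon with a band gap of about 1.1 eV, photons with wavelength longer than the cutoff pass through unused, while the energy a shorter-wavelength photon carries above 1.1 eV is lost as heat.

# Illustrative calculation: band-gap cutoff wavelength and excess photon energy.
h = 6.626e-34    # Planck constant, J*s
c = 3.0e8        # speed of light, m/s
eV = 1.602e-19   # joules per electron-volt

def cutoff_wavelength_nm(band_gap_eV):
    """Longest wavelength (nm) a photon can have and still excite an electron."""
    return h * c / (band_gap_eV * eV) * 1e9

print(f"Silicon (1.1 eV): photons beyond ~{cutoff_wavelength_nm(1.1):.0f} nm are not absorbed")
blue_photon_eV = h * c / 400e-9 / eV
print(f"A 400 nm photon carries {blue_photon_eV:.2f} eV; "
      f"about {blue_photon_eV - 1.1:.2f} eV of it is wasted as heat")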

Despite the many potential uses and ways to include nanostructures in photovoltaic devices, these solar cells share several issues and challenges. The most basic issue is that design rules for nanostructure solar cells do not yet exist, and thus many choices or design parameters do not have sufficient theoretical or experimental guidance [9]. The major problems are shown in figure 4.

Figure 4. Design problems in nano solar cells

Working of Nano Solar Cells: The nanorods behave as wires: when they absorb light of a specific wavelength, they generate electrons. These electrons flow through the nanorods until they reach the aluminum electrode, where they are combined to form a current and used as electricity.

This type of cell is cheaper to manufacture than conventional ones for two main reasons. First, these plastic cells are not made from silicon, which can be very expensive. Second, manufacturing these cells does not require expensive equipment such as clean rooms or vacuum chambers, unlike conventional silicon-based solar cells; instead, these plastic cells can be manufactured in a beaker [10]. A schematic of the working of nano solar cells containing quantum dots is shown in figure 5.

Figure 5. Working of nano solar cells

UC Berkeley graduate student Wendy Huynh says, "We use a much dirtier process, and that makes it cheap" [8]. Another potential feature of these solar cells is that the nanorods could be tuned to absorb various wavelengths of light. This could significantly increase the efficiency of the solar cell because more of the incident light could be utilized. According to a 2001 report by the National Science Foundation, The Societal Implications of Nanoscience and Nanotechnology, if the efficiency of photovoltaic cells were improved by a factor of two using nanotechnology, the role of solar energy would grow substantially. In addition to the University of California, Berkeley, a well-known company named Konarka Technologies is also pursuing the use of nanotechnology to improve solar energy. In fact, they are already manufacturing a product called Power Plastic, which absorbs both sunlight and indoor light and converts it into electricity. For patent reasons, their technology is kept secret, but the basic concept is that Power Plastic is made using nanoscale titanium dioxide particles coated in photovoltaic dyes, which generate electricity when they absorb light.

According to Engineer Magazine, Konarka has already built fully functional solar cells that have achieved efficiencies of around 8%. Future designs are already underway, which include tuning the nanorods to absorb certain wavelengths of light in order to exploit a greater range of the colour spectrum. Improvements such as this could make it possible to manufacture inexpensive solar cells with the same efficiency as current technology.

Uses of Nano Solar Cells: Since the manufacturing cost of conventional solar cells is one of their biggest drawbacks, this new technology could have some impressive effects on our daily lives. It would help preserve the environment, decrease the loads soldiers have to carry, provide electricity for rural areas, and have a wide array of commercial applications due to its wireless capabilities. Inexpensive solar cells based on nanotechnology would help preserve the environment. According to Engineer Magazine, Konarka Technologies is already proposing coating existing roofing materials with its plastic photovoltaic cells. If it were inexpensive enough to cover a home's entire roof with solar cells, then enough energy could be captured to power almost the entire house [9]. If many houses did this, our dependence on the electric grid (fossil fuels) would decrease, helping to reduce pollution. Some people have even proposed covering cars with solar cells or making solar cell windows. Even though their efficiency is not very high, if solar cells were inexpensive enough, then enough of them could be used to generate sufficient electricity.
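A rough, hypothetical estimate of the rooftop claim above, using assumed figures for roof area, insolation and household demand (none of these numbers come from the cited sources):

# Back-of-the-envelope sketch: could a solar roof power a typical house?
roof_area_m2 = 100                   # assumed usable roof area
insolation_kwh_per_m2_day = 5.0      # assumed daily insolation for a sunny region
efficiency = 0.10                    # the ~10% figure quoted earlier for mass-produced cells

daily_yield_kwh = roof_area_m2 * insolation_kwh_per_m2_day * efficiency
household_demand_kwh_per_day = 15.0  # assumed average household consumption

print(f"Estimated daily yield: {daily_yield_kwh:.0f} kWh")
print(f"Covers the assumed demand {daily_yield_kwh / household_demand_kwh_per_day:.1f} times over")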

Inexpensive solar cells would also help provide electricity for rural areas or third-world countries. Since the electricity demand in these areas is not high, and the areas are widely spaced out, it is not practical to connect them to an electrical grid. However, this is an ideal situation for solar energy. If it were inexpensive enough, it could be used for lighting, hot water, medical devices, and even cooking. It would greatly improve the standard of living for millions, possibly even billions, of people. Finally, inexpensive solar cells could also revolutionize the electronics industry.


Consequently, even though conventional solar cells are expensive and cannot yet achieve high efficiency, it may be possible to lower manufacturing costs using nanotechnology. Institutions such as the University of California, Berkeley and Konarka Technologies are actively pursuing ways to make this happen. Although solar cells are not efficient enough to replace large-scale electric grids, there are many opportunities for them to be used for low-power devices. The effects that a low-cost, reasonably efficient (low-power) solar cell would have on society are tremendous. It would help preserve the environment, protect soldiers, provide rural areas with electricity, and transform the electronics industry.

Conclusion: Nanotechnology has changed many areas of technology. The development of new nanomaterials has changed many electronic materials, and semiconductor devices based on nanotechnology show better performance than conventional ones. Nano solar devices, which are developed using nanomaterials, are more efficient and cheaper. In green buildings, conventional solar cells for harnessing solar energy can therefore be replaced by nano solar cells, helping to achieve the target of highly efficient energy use in green buildings.

Acknowledgement: I am very thankful to the Civil Engineering Dept. of Jagan Nath University, Jaipur, which is organizing a national-level conference on green buildings in May 2012. This review paper was written as an introduction to nano solar cells for use in green buildings.

References:
1. http://www.calrecycle.ca.gov/greenbuilding/basics.htm
2. http://www.wikipedia.org
3. http://edugreen.teri.res.in/explore/renew/solar.htm
4. http://www.technologystudent.com/energy1/solar1.htm
5. http://www.nanosolar.com/technology
6. Aldous, Scott. "How Solar Cells Work." How Stuff Works, 22 May 2005.
7. "Power Plastic." Engineer Magazine, March 8, 2005.
8. Choi, Charles. "Nanotech Improving Energy Options." Space Daily, New York: May.
9. Honsberg, Christiana B. "Nanostructured Solar Cells for High Efficiency Photovoltaics." Department of Electrical and Computer Engineering, University of Delaware, Newark, DE, USA.
10. http://science.howstuffworks.com/solar-cell1.htm


Difficulties in Teaching English in India and Importance of the Bilingual Method Dr. Preeti Bala Sharma Asst. Prof., Dept. of English, Jagan Nath University, Jaipur.

"A teacher affects eternity; he can never tell where his influence stops." - Henry Brooks Adams

As we all know, the world has been reduced to the size of a global village, and English is the most important and effective language, having both communicative and educative value. Speaking effectively and articulating and expressing ourselves clearly are vital competencies, especially in today's global landscape where English remains the lingua franca for exchanging ideas in business, science and technology. In India, according to recent surveys, approximately 35 million speakers use English. This means there are very few countries in the world where English is taught on such a massive scale as in India. But English is still considered a foreign language in India, and it is quite difficult for students to learn it for various reasons. So teaching English is a challenge before us. There are various factors that affect the teaching and learning of English as a foreign language in India. The major factors among them are:

English language teaching is not distinguished from teaching a subject like history. History is essentially an information-based subject, and the number of students in a class does not matter when merely information is to be transmitted. English, on the other hand, is skill-based and is best imparted through individual effort and attention. But in reality English classrooms remain so over-crowded that the class appears like the dome of Satan's conference, and a teacher finds it impossible to give individual attention.

Defects in the Examination System: One universal lament in academia is that no educational reform has been possible in this country because efforts at all levels come to naught owing to the prevailing system of yearly examinations, which has lost all credibility. A plethora of bazaar guides and coaching shops have taken the place of real classroom teaching.

Insufficient Time, Resources and Materials: The inter-disciplinary relation of the teaching and learning process brings home the fact that the problems of teachers can be solved if we concentrate on the causes of the problems of the students. There is a dearth of students' workbooks, teachers' handbooks and the necessary audio-visual material. English is at best treated as a poor relation in the overall scheme of the daily teaching schedule in a college. English classes are fixed at odd hours. Main subjects like history, commerce or chemistry are given six periods a week, whereas English (called General English) is disposed of with a mere two or three periods. According to Prof. R. P. Bhatnagar, it is very often the case that the course books and readers prescribed are simply not teachable for one of the following reasons: very difficult and uninteresting lessons are required to be taught; the subject of the lesson is much below the material and linguistic level of the students; or the total corpus to be taught is too slight and insubstantial to make any meaningful difference to the learner's achievement.

Lack of Participation: Most students lack interest in learning English, so they are unable to comprehend its technicalities. They find themselves poor at analyzing the subject critically, as their grasping power is inadequate. As a result, students do not participate in group discussions. Moreover, due to time constraints, teachers are also not able to make their classes interactive. College students in Rajasthan come from various social backgrounds; most of them come from rural areas where English is considered a monster. Besides all these major factors, I personally believe that the problems a learner of English (particularly from Rajasthan) faces are, firstly, regarding the sound system; secondly, regarding structure, i.e. the arrangement of words into sentences; and thirdly, regarding learning the vocabulary. These problems can be seen at the very beginning stage for learners, so it is difficult for a teacher of English to know how to teach under such circumstances. The main areas of difficulty for English learners in India are as follows:

Grammatical Ghosts: Structural differences between English and Hindi pose big problems for beginners, so the learner tends to build sentences in the foreign language in the same way as s/he does in her/his mother tongue. For example, "Weh aam khaate he" becomes "we mangoes eat". Even the confusion over using "s" and "es" with the verb is a big problem for English learners, and the tendency towards over-generalization, such as adding "s" to singular nouns, is a common mistake: "woman" becomes "womans", and we get "datas" and "childs". The same is the case with the use of "do", "does" and "did" in negative and interrogative sentences, and mistakes are repeated with antonyms; for example, "complete" becomes "uncomplete" rather than "incomplete".

Pronunciation Pitfalls: Indian learners of English who come from rural areas have a tendency to pronounce the silent speech sounds of English; for example, the word "knight" is pronounced as "knait". Besides this, the English speech sounds /s/ and /ʃ/ also create problems for learners, who use only the /s/ sound; therefore "show" is realized as /su/ and "Shakshi" is realized as /saksi/. Beyond these differences, cultural differences and the habit of translating literally also cause confusion among English learners. For example, "Swami Vivekanand ka janam Calcutta me hua tha. Weh mahaan vyakti the." becomes "Swami Vivekanand was born in Calcutta. They were great person." When learners were given the sentence "mera sar chakkar kha raha he" to translate into English, they rendered it as "my head is eating circles". Other examples are "main ek aam aadmi hu" translated as "I am a mango man"; "sadak pe goliya chal rahi he" as "tablets are walking on the road"; and "mughe English aati he" as "English comes to me".

In these conditions, the role of the English teacher and of classroom teaching is of paramount significance. All the above-mentioned problems are a challenge to us, as the teacher plays the most significant role in developing English language skills in students. S/he has to keep in mind the age of the student, his/her native language, his/her cultural background and his/her previous experience with English. For this I believe that the bilingual method can be of the greatest assistance, directly or indirectly, to the learner as well as the teacher of English. In this method, two languages are used in the teaching of English or any other language: one is the target language, i.e. English, and the other is the learner's mother tongue. Unlike the direct method and the translation-cum-grammar approach, in the bilingual method the teacher teaches English by giving mother-tongue equivalents of English words or sentences wherever required. The use of the mother tongue is therefore allowed to give directions, elicit an answer or explain difficult and new words. Teachers of English also use the mother tongue to convey the meaning of new words, phrases, idioms and sentences. For example, to teach the words "freedom", "explosion" and "non-violence", simply explaining these words in front of learners will not work, but if we give them the words "aajadi", "dhamaka" and "ahimsa", they will understand the meaning at once. The same can be done with idioms and phrases: if we have to teach learners that "it is raining cats and dogs" and we give them a word like "muslaadhaar", they will understand the phrase more clearly. The bilingual method thus helps in getting the meaning across as completely and as quickly as possible.

As far as the pronunciation of English sounds is concerned, this method is again a great asset. In the earlier stages of teaching, pronunciation is not an easy task: the English language has 26 letters of the alphabet but 44 sounds, and learning these sounds is a great problem for learners. Since a teacher cannot easily teach these sounds by explaining their articulation and demonstrating the organs of speech, the mother tongue is again a great remedy: to teach English sounds, the closest natural equivalents among the Hindi letters of the alphabet can be used. In addition, Kamil Bulke's Hindi-English Shabdkosh can be used to teach pronunciation to Indian learners of English. This method therefore accomplishes what probably no other method can do so directly and sensitively. Through these efforts the teachers of English can create a total language event that immediately brings home to the learners how sounds are pronounced. To conclude, it is very clear from the above discussion that for effective teaching of English in a country like India, the bilingual method is the most appropriate method, and no one should grudge the judicious use of the mother tongue in teaching English. According to Michael Byram,


"The bilingual method proceeds step by step under careful guidance with continual feedback, ensuring that prerequisite sub- or part-skills are acquired before a final stage of free and spontaneous language use. Learners are led from knowing nothing about a language situation to complete mastery of one situation, to a mastery of sentence variations and combinations, to forays into new, unknown and unforeseeable communication situations."

References:
Oxford Language Reference. Oxford University Press, 2001.
Crystal, David. The Cambridge Encyclopedia of the English Language. Cambridge: CUP, 1997.
The World Factbook. http://www.cia.gov/cia/publication/factbook/index.html
Byram, Michael (2004). Routledge Encyclopedia of Language Teaching and Learning. Routledge: New York. p. 85.


Our Biotech Future Dr. Vikas Bishnoi Asst. Prof., Dept. of Biotech Engineering, Jagan Nath University, Jaipur.

It has become part of the accepted wisdom to say that the twentieth century was the century of physics and the twenty-first century will be the century of biology. Two facts about the coming century are agreed on by almost everyone. Biology is now bigger than physics, as measured by the size of budgets, by the size of the workforce, or by the output of major discoveries; and biology is likely to remain the biggest part of science through the twenty-first century. Biology is also more important than physics, as measured by its economic consequences, by its ethical implications, or by its effects on human welfare. These facts raise an interesting question. Will the domestication of high technology, which we have seen marching from triumph to triumph with the advent of personal computers and GPS receivers and digital cameras, soon be extended from physical technology to biotechnology? I believe that the answer to this question is yes. Here I am bold enough to make a definite prediction. I predict that the domestication of biotechnology will dominate our lives during the next fifty years at least as much as the domestication of computers has dominated our lives during the previous fifty years.

I see a close analogy between John von Neumann's blinkered vision of computers as large centralized facilities and the public perception of genetic engineering today as an activity of large pharmaceutical and agribusiness corporations such as Monsanto. The public distrusts Monsanto because Monsanto likes to put genes for poisonous pesticides into food crops, just as we distrusted von Neumann because he liked to use his computer for designing hydrogen bombs secretly at midnight. It is likely that genetic engineering will remain unpopular and controversial so long as it remains a centralized activity in the hands of large corporations. I see a bright future for the biotechnology industry when it follows the path of the computer industry, the path that von Neumann failed to foresee, becoming small and domesticated rather than big and centralized.

The first step in this direction was already taken recently, when genetically modified tropical fish with new and brilliant colors appeared in pet stores. For biotechnology to become domesticated, the next step is to become user-friendly. Every orchid or rose or lizard or snake is the work of a dedicated and skilled breeder. There are thousands of people, amateurs and professionals, who devote their lives to this business. Now imagine what will happen when the tools of genetic engineering become accessible to these people. There will be do-it-yourself kits for gardeners who will use genetic engineering to breed new varieties of roses and orchids. Also kits for lovers of pigeons and parrots and lizards and snakes to breed new varieties of pets. Breeders of dogs and cats will have their kits too. Domesticated biotechnology, once it gets into the hands of housewives and children, will give us an explosion of diversity of new living creatures, rather than the monoculture crops that the big corporations prefer. New lineages will proliferate to replace those that monoculture farming and deforestation have destroyed. Designing genomes will be a personal thing, a new art form as creative as painting or sculpture.


Few of the new creations will be masterpieces, but a great many will bring joy to their creators and variety to our fauna and flora. The final step in the domestication of biotechnology will be biotech games, designed like computer games for children down to kindergarten age but played with real eggs and seeds rather than with images on a screen. Playing such games, kids will acquire an intimate feeling for the organisms that they are growing. The winner could be the kid whose seed grows the prickliest cactus, or the kid whose egg hatches the cutest dinosaur. These games will be messy and possibly dangerous. Rules and regulations will be needed to make sure that our kids do not endanger themselves and others. The dangers of biotechnology are real and serious.

Green Technology

The domestication of biotechnology in everyday life may also be helpful in solving practical economic and environmental problems. Once a new generation of children has grown up, as familiar with biotech games as our grandchildren are now with computer games, biotechnology will no longer seem weird and alien. In the era of Open Source biology, the magic of genes will be available to anyone with the skill and imagination to use it. The way will be open for biotechnology to move into the mainstream of economic development, to help us solve some of our urgent social problems and ameliorate the human condition all over the earth.

Open Source biology could be a powerful tool, giving us access to cheap and abundant solar energy. A plant is a creature that uses the energy of sunlight to convert water and carbon dioxide and other simple chemicals into roots and leaves and flowers. To live, it needs to collect sunlight. But it uses sunlight with low efficiency. The most efficient crop plants, such as sugarcane or maize, convert about 1 percent of the sunlight that falls onto them into chemical energy. Artificial solar collectors made of silicon can do much better. Silicon solar cells can convert sunlight into electrical energy with 15 percent efficiency, and electrical energy can be converted into chemical energy without much loss. We can imagine that in the future, when we have mastered the art of genetically engineering plants, we may breed new crop plants that have leaves made of silicon, converting sunlight into chemical energy with ten times the efficiency of natural plants. These artificial crop plants would reduce the area of land needed for biomass production by a factor of ten. They would allow solar energy to be used on a massive scale without taking up too much land. They would look like natural plants except that their leaves would be black, the colour of silicon, instead of green, the colour of chlorophyll.

The question I am asking is, how long will it take us to grow plants with silicon leaves? If the natural evolution of plants had been driven by the need for high efficiency of utilization of sunlight, then the leaves of all plants would have been black. Black leaves would absorb sunlight more efficiently than leaves of any other colour. Obviously plant evolution was driven by other needs, and in particular by the need for protection against overheating. For a plant growing in a hot climate, it is advantageous to reflect as much as possible of the sunlight that is not used for growth. There is plenty of sunlight, and it is not important to use it with maximum efficiency. The plants have evolved with chlorophyll in their leaves to absorb the useful red and blue components of sunlight and to reflect the green.
That is why it is reasonable for plants in tropical climates to be green. But this logic does not explain why plants in cold climates, where sunlight is scarce, are also green. We could imagine that in a place like Iceland, overheating would not be a problem, and plants with black leaves, using sunlight more efficiently, would have an evolutionary advantage. For some reason which we do not understand, natural plants with black leaves never appeared. Why not?


Perhaps we shall not understand why nature did not travel this route until we have travelled it ourselves. After we have explored this route to the end, when we have created new forests of black-leaved plants that can use sunlight ten times more efficiently than natural plants, we shall be confronted by a new set of environmental problems. Who shall be allowed to grow the black-leaved plants? Will black-leaved plants remain an artificially maintained cultivar, or will they invade and permanently change the natural ecology? What shall we do with the silicon trash that these plants leave behind them? Shall we be able to design a whole ecology of silicon-eating microbes and fungi and earthworms to keep the black-leaved plants in balance with the rest of nature and to recycle their silicon? The twenty-first century will bring us powerful new tools of genetic engineering with which to manipulate our farms and forests. With the new tools will come new questions and new responsibilities.

Rural poverty is one of the great evils of the modern world. The lack of jobs and economic opportunities in villages drives millions of people to migrate from villages into overcrowded cities. The continuing migration causes immense social and environmental problems in the major cities of poor countries. The effects of poverty are most visible in the cities, but the causes of poverty lie mostly in the villages. What the world needs is a technology that directly attacks the problem of rural poverty by creating wealth and jobs in the villages. A technology that creates industries and careers in villages would give the villagers a practical alternative to migration. It would give them a chance to survive and prosper without uprooting themselves.

The shifting balance of wealth and population between villages and cities is one of the main themes of human history over the last ten thousand years. The shift from villages to cities is strongly coupled with a shift from one kind of technology to another. Roughly speaking, green technology is the technology that gave birth to village communities ten thousand years ago, starting from the domestication of plants and animals, the invention of agriculture, the breeding of goats and sheep and horses and cows and pigs, the manufacture of textiles and cheese and wine. Gray technology is the technology that gave birth to cities and empires five thousand years later, starting from the forging of bronze and iron, the invention of wheeled vehicles and paved roads, the building of ships and war chariots, the manufacture of swords and guns and bombs. Gray technology also produced the steel plows, tractors, reapers, and processing plants that made agriculture more productive and transferred much of the resulting wealth from village-based farmers to city-based corporations. For the first five of the ten thousand years of human civilization, wealth and power belonged to villages with green technology, and for the second five thousand years wealth and power belonged to cities with gray technology. Beginning about five hundred years ago, gray technology became increasingly dominant, as we learned to build machines that used power from wind and water and steam and electricity. In the last hundred years, wealth and power were even more heavily concentrated in cities as gray technology raced ahead. As cities became richer, rural poverty deepened.


Within a few more decades, as the continued exploration of genomes gives us better knowledge of the architecture of living creatures, we shall be able to design new species of microbes and plants according to our needs. The way will then be open for green technology to do more cheaply and more cleanly many of the things that gray technology can do, and also to do many things that gray technology has failed to do. Green technology could replace most of our existing chemical industries and a large part of our mining and manufacturing industries. Genetically engineered earthworms could extract common metals such as aluminum and titanium from clay, and genetically engineered seaweed could extract magnesium or gold from seawater. Green technology could also achieve more extensive recycling of waste products and worn-out machines, with great benefit to the environment. An economic system based on green technology could come much closer to the goal of sustainability, using sunlight instead of fossil fuels as the primary source of energy. New species of termite could be engineered to chew up derelict automobiles instead of houses, and new species of tree could be engineered to convert carbon dioxide and sunlight into liquid fuels instead of cellulose.

Before genetically modified termites and trees can be allowed to help solve our economic and environmental problems, great arguments will rage over the possible damage they may do. Many of the people who call themselves green are passionately opposed to green technology. But in the end, if the technology is developed carefully and deployed with sensitivity to human feelings, it is likely to be accepted by most of the people who will be affected by it, just as the equally unnatural and unfamiliar green technologies of milking cows and ploughing soils and fermenting grapes were accepted by our ancestors long ago. Future generations of people raised from childhood with biotech toys and games will probably accept it more easily than we do. Nobody can predict how long it may take to try out the new technology in a thousand different ways and measure its costs and benefits.

In a country like India with a large rural population, bringing wealth to the villages means bringing jobs other than farming. Most of the villagers must cease to be subsistence farmers and become shopkeepers or schoolteachers or bankers or engineers or poets. In the end the villages must become gentrified, as they are today in England, with the old farm workers' cottages converted into garages, and the few remaining farmers converted into highly skilled professionals. It is fortunate that sunlight is most abundant in tropical countries, where a large fraction of the world's people live and where rural poverty is most acute. Since sunlight is distributed more equitably than coal and oil, green technology can be a great equalizer, helping to narrow the gap between rich and poor countries.


5G Wirelesses - The Next Step in Internet Technology Sudarshan Kumar Jain Department of Elec. & Comm., Jagan Nath University, Jaipur.

With the day-by-day advancement of technology, mobile phones are also becoming more capable. Every 8 to 10 years a new generation appears in the mobile market: first came 1G, a few years later 2G surfaced, then 3G, then the very popular and, many say, super-fast 4G, and now 5G (fifth generation) technology is on its way. This newer and more capable technology will provide a unique experience to its users. As developers of cellular phones and services introduce new technologies, customers are no longer ignorant of them; in fact, they are becoming more aware and are seeking suitable packages containing all the latest technology and features that a mobile can have. After the launch of 4G in countries such as the United States of America and the United Kingdom, research has turned towards the next generation of mobile technology, the fifth generation. The name 5G has not yet been used by any standardization body or telecom company, since even the 4G standards have not been fully standardized. Studying the increase in data rates from generation to generation, one can roughly predict that the maximum data rate (uploading or downloading) in 5G would be about 1 Gbps. This generation is expected to be rolled out around 2020.

The 5G network is envisaged as a near-perfect level of wireless communication in mobile technology. The cable network is becoming a memory of the past, and mobiles are no longer only a communication tool but serve many other purposes. All the previous wireless technologies offered the convenience of telephony and data sharing, but 5G brings a new touch and makes life truly mobile. The new 5G network is expected to improve the services and applications offered, to be very fast and reliable, and to revolutionize the concept of handheld devices: all services and applications, such as telephony, gaming and many other multimedia applications, will be accessed through a single IP. Wireless services are not a new thing in the market; millions of users all over the world have already experienced them and are, by now, attached to this wireless technology.

5G Network Features
A first remarkable feature of a 5G network is broadband internet on mobile phones: it would be possible to provide internet access on a computer simply by connecting it to the mobile handset. Data sharing in a 5G network is also very easy; it removes the old condition of holding two mobiles face to face for a transfer, and with 5G Bluetooth technology data can be shared within a range of about 50 m.
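As a rough illustration of what the jump to a ~1 Gbps peak rate would mean in practice, the sketch below compares how long a 1 GB download would take at representative peak rates for each generation; the 2G and 3G figures are assumptions used only for comparison, and real-world throughput is always lower than the peak.

# Illustrative comparison of 1 GB download times at assumed peak data rates.
file_size_bits = 8 * 10**9   # 1 gigabyte expressed in bits

rates_bps = {
    "2G (14.4 kbps, assumed peak)": 14.4e3,
    "3G (2 Mbps, assumed peak)": 2e6,
    "5G (1 Gbps, predicted)": 1e9,
}

for generation, rate in rates_bps.items():
    seconds = file_size_bits / rate
    if seconds > 3600:
        print(f"{generation}: about {seconds / 3600:.1f} hours")
    else:
        print(f"{generation}: about {seconds:.0f} seconds")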


According to some research papers on 5G technology, the main features the technology might have are as follows:

- A 5G user might be able to connect to different networks at the same time or switch between two networks. These networks need not be 5G networks; they can be of any generation.
- 5G will offer high, sharp display resolution for heavy mobile phone users and will provide huge, fast internet access.
- A new radio system is possible in which different radio technologies share the same spectrum. This can be done by finding unused spectrum and then adapting to the radio technology with which the spectrum is being shared.
- Every mobile in a 5G network will have an IP address (IPv6) assigned according to the location and network being used.
- 5G technology is expected to bring a single global standard.
- The technology is also expected to support virtual private networks and advanced billing interfaces.
- With a 5G-enabled phone, one might be able to connect the phone to a laptop to get broadband access.
- With the remote administration offered by 5G, a consumer can get faster and more convenient troubleshooting in less time.
- Other features that might be offered by 5G include a transporter-class gateway, subscriber supervision tools, remote diagnostics, etc.

Thus 5G-enabled smart phones will pose a great challenge to laptops because of the extraordinary features offered. With thousands of mobile applications, a user will be able to do most of the tasks he performs on his laptop on his 5G smart phone.


Error control coding for next generation wireless system M. L. Saini Assistant Professor, Department of CS/IT, Jagan Nath University, Jaipur

Present-day wireless systems are undergoing rapid technological change. The development of a variety of handheld devices, such as mobiles, PDAs and iPods, has revolutionized wireless communication. Under the present scenario, the requirement in telecommunication is the development of high-capacity wireless networks. Though data broadcasting has been on the air for quite some time, the multimedia aspects were not explored until recently. Due to the ever-increasing bandwidth demands of future wireless services, the radio frequency band is becoming more and more valuable. Present-day wireless systems, based on a cellular structure, have historically been designed with voice traffic in mind. Presently the second generation (2G) of networks is in operation and the third generation (3G) is almost fully deployed. 3G systems provide a significantly higher data rate (64 Kbps to 2 Mbps) than 2G systems (9.6 to 14.4 Kbps), but the spectral efficiency of 3G networks is too low to support high-data-rate services at low cost, limiting the usefulness of such services. Next generation wireless communication systems need to meet a higher standard so as to support various broadband wireless services, such as HDTV (4-20 Mbps), video conferencing, mobile videophones, high-speed Internet access, computer network applications (1-100 Mbps), and potentially many multimedia applications, at a significantly reduced cost in comparison with 3G networks. Existing wireless systems can hardly provide transmission capacities of the order of a few Mbps. To deliver multimedia/broadband signals to remotely distributed cells, wireless transmission channels alone are no longer able to fulfil the demand for higher bandwidth. To exploit the wideband capabilities of the mobile/wireless network, researchers have observed that millimeter waves (mm-waves) can provide very high bandwidth when transmitted, although, due to high attenuation losses (about 16 dB/km) at 60 GHz, the overall transmission distance is limited to a relatively short span. Optical fiber is well known as a transmission medium with an enormous transmission capacity of about 4 Tbps in the 1.55-μm-wavelength region, where erbium-doped fiber amplifiers (EDFA) are most effective. Millimeter waves and optical fiber can therefore provide data capacities of the order of hundreds of Mbps and Tbps respectively. Hence the requirements of a broadband wireless system can be met through the integration of optical fiber and millimeter-wave wireless systems. We suggest a modified wireless system with optical fiber as the feeder network as an upgrade of the existing wireless network in terms of transmission capability.

Code Designs
The design of codes based on graphs can be understood as a multi-variable, multi-constraint optimization problem. The constraints of this problem are the performance requirements, flexibility (i.e., block lengths, rates, etc.) and encoding/decoding complexity. The figure below illustrates the constraints and variables involved in the design of codes defined on graphs.


Figure: Design of codes defined on graphs as an optimization problem. Variables include the underlying structure (algebraic or pseudo-random), the code family (block or convolutional), block length, constraint length, distances and rate; constraints include performance (girth maximization, channel matching, matching for turbo equalization), encoding/decoding complexity (interconnection minimization), flexibility and rate compatibility.

One of the first decisions to be taken when designing codes defined on graphs is whether their graphs should have a pseudo-random, an algebraic or a combinatorial underlying structure. These different structures have their advantages as well as their downsides. For instance, pseudo-random structures give the code designer a great deal of freedom: codes originating from them can have practically any rate and block length. However, such codes prove difficult to implement because of their complete lack of regularity. On the other hand, algebraic and combinatorial designs, which we will call structured designs from now on, cannot exist for all rates and block lengths. This happens because the algebraic or combinatorial constructs used are based on group theory and number theory and are therefore inherently quantized in nature, mostly built around prime numbers. Another characteristic of structured designs is that it is normally possible to obtain good codes for small to medium block lengths, whereas pseudo-random designs perform better at long block lengths. From an implementation point of view, structured designs have many advantages; for instance, the regular connections in their graphs greatly simplify the hardware implementation of the decoders for these codes.
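As a concrete illustration of the pseudo-random approach discussed above, the following sketch builds a small (n, wc, wr)-regular parity-check matrix in the style of Gallager's LDPC construction, i.e., by stacking randomly column-permuted copies of a base band. The construction, function name and parameter values are illustrative assumptions for this bulletin, not a design taken from the article.

```python
import numpy as np

def gallager_ldpc(n, wc, wr, seed=0):
    """Pseudo-random (n, wc, wr)-regular parity-check matrix H.

    n  : code length (assumed divisible by the row weight wr)
    wc : column weight (ones per column, variable-node degree)
    wr : row weight (ones per row, check-node degree)
    Returns H with m = n*wc/wr rows, every row of weight wr and
    every column of weight wc.
    """
    assert n % wr == 0, "n must be a multiple of the row weight"
    rng = np.random.default_rng(seed)
    rows_per_band = n // wr

    # Base band: row i has ones in columns i*wr ... (i+1)*wr - 1.
    band = np.zeros((rows_per_band, n), dtype=np.uint8)
    for i in range(rows_per_band):
        band[i, i * wr:(i + 1) * wr] = 1

    # Stack wc bands; every band after the first gets its columns randomly permuted.
    bands = [band] + [band[:, rng.permutation(n)] for _ in range(wc - 1)]
    return np.vstack(bands)

H = gallager_ldpc(n=20, wc=3, wr=4)
print(H.shape)        # (15, 20): design rate 1 - wc/wr = 0.25
print(H.sum(axis=0))  # every column weight is wc = 3
print(H.sum(axis=1))  # every row weight is wr = 4
```

A structured design would instead fill the bands with algebraically defined permutations (for example circulant shifts indexed by a prime), trading some freedom in rate and block length for the regular interconnections that ease hardware implementation.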


Physical characterization of nano-materials by physical instrumentation
Pramod Kumar, Dept. of Physics, Jagan Nath University, Jaipur

INTRODUCTION: Nanomaterials science is a field that takes a materials-science-based approach to nanotechnology. It studies materials with morphological features on the nanoscale, especially those with special properties stemming from their nanoscale dimensions. The nanoscale is usually defined as smaller than one tenth of a micrometer in at least one dimension, though the term is sometimes also applied to materials smaller than one micrometer. Nanomaterials and nanotechnologies have attracted tremendous attention in recent research. New physical properties and new technologies, both in sample preparation and in device fabrication, have emerged from the development of nanoscience. Researchers from various fields, including physicists, chemists, materials scientists, and mechanical and electrical engineers, are involved in this work. In this review, various methods of preparing nanomaterials and their physical characterization are discussed. We describe exotic physical properties concerning linear and nonlinear optical spectra, the temperature dependence of resistivity, spin resonance spectra, and magnetic susceptibility measurements. Materials referred to as "nanomaterials" generally fall into two categories: fullerenes and inorganic nanoparticles. Physical characterization plays an important role in assessing the properties of materials; nanomaterials can be assessed by the methods discussed here.

SAMPLE PREPARATION: The first nanomaterials were prepared by vacuum evaporation of iron in an inert gas, condensed on cooled substrates. Since then, many methods to fabricate nanoparticles, including inorganic ceramics and organic compounds, have been developed, such as the arc plasma torch to produce metallic powders, laser-induced chemical vapor deposition (CVD) to produce special compounds, and microwave plasma-enhanced CVD to produce hard and brittle materials. Instead of chemical vapor, liquid coprecipitation can produce single-phase compounds, and solid-state thermal decomposition can produce single-phase metal oxides.

PHYSICAL CHARACTERIZATIONS:
1. X-Ray Diffraction
Waves of wavelength comparable to the crystal lattice spacing are strongly scattered (diffracted). Analysis of the diffraction pattern allows one to obtain information such as the lattice parameter, crystal structure, sample orientation, and particle size. In a typical set-up, a collimated beam of X-rays is incident on the sample and the intensity of the diffracted X-rays is measured as a function of the diffraction angle. The intensities of the spots provide information about the atomic basis, while the sharpness and shape of the spots are related to the perfection of the crystal. The two basic procedures involve either a single crystal or a powder. With single crystals, a great deal of structural information can be obtained; on the other hand, single crystals may not be readily available, and orienting the crystal is not straightforward.
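Two standard relations, not spelled out in the article, are commonly used to turn the measured pattern into numbers: Bragg's law for the lattice spacing and the Scherrer equation for an estimate of crystallite size from peak broadening. The sketch below assumes a Cu K-alpha source and uses made-up peak values purely for illustration.

```python
import math

WAVELENGTH = 0.15406  # nm, Cu K-alpha X-rays (an assumed, typical lab source)

def bragg_d_spacing(two_theta_deg, lam=WAVELENGTH, order=1):
    """Lattice spacing d from Bragg's law: n*lambda = 2*d*sin(theta)."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * lam / (2.0 * math.sin(theta))

def scherrer_size(two_theta_deg, fwhm_deg, lam=WAVELENGTH, k=0.9):
    """Crystallite size D = K*lambda / (beta*cos(theta)),
    where beta is the peak FWHM in radians and K is a shape factor (~0.9)."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return k * lam / (beta * math.cos(theta))

# Illustrative peak: 2*theta = 36.3 degrees with a FWHM of 0.5 degrees.
print(f"d-spacing       : {bragg_d_spacing(36.3):.4f} nm")
print(f"crystallite size: {scherrer_size(36.3, 0.5):.1f} nm")
```

The Scherrer figure is only an estimate, since strain and instrumental broadening also widen the peaks.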


One disadvantage of XRD is the low intensity of the diffracted beam for low-Z materials.

2. Optical Spectroscopy
Optical spectroscopy uses the interaction of light with matter, as a function of wavelength or energy, to obtain information about the material. For example, absorption or emission (photoluminescence, or PL) experiments with visible and UV light tend to reveal the electronic structure. Vibrational properties of the lattice (i.e., phonons) usually lie in the IR and are studied using either IR absorption or Raman spectroscopy. Raman scattering is an example of an inelastic process, in which the energy of the incoming light is changed; the others are elastic processes, in which only the intensity is changed. The typical penetration depth is of the order of 50 nm. Optical spectroscopy is attractive for materials characterization because it is fast, nondestructive and of high resolution.

2.1. UV Spectroscopy
This technique involves the absorption of near-UV or visible light; one measures both intensity and wavelength. It is usually applied to molecules and inorganic ions in solution. The broad spectral features make it less than ideal for sample identification, but the analytical concentration can be determined from the absorbance at a single wavelength using the Beer-Lambert law: A = -log(I/I0) = a*b*c, where A is the absorbance, a the absorptivity, b the path length, and c the concentration. A schematic of the technique is shown in Fig. 2, together with sample data.
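For instance, the Beer-Lambert relation above can be applied directly to a measured transmission; the intensities, absorptivity and path length in this sketch are illustrative values, not data from the article.

```python
import math

def absorbance(intensity_in, intensity_out):
    """A = -log10(I/I0), from transmitted versus incident intensity."""
    return -math.log10(intensity_out / intensity_in)

def concentration(a_value, absorptivity, path_length_cm):
    """Beer-Lambert law A = a*b*c, solved for the concentration c."""
    return a_value / (absorptivity * path_length_cm)

A = absorbance(intensity_in=100.0, intensity_out=25.0)          # A ~ 0.602
c = concentration(A, absorptivity=1.5e4, path_length_cm=1.0)    # absorptivity in L/(mol*cm), assumed
print(f"A = {A:.3f}, c = {c:.2e} mol/L")
```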


Figure 2: Schematic of a UV-vis spectrophotometer, the apparatus, and sample data on ZnO nanoparticles

2.2. Absorption Spectroscopy
Atomic absorption spectroscopy (AAS) is a spectro-analytical procedure for the quantitative determination of chemical elements, employing the absorption of optical radiation (light) by free atoms in the gaseous state. In analytical chemistry the technique is used for determining the concentration of a particular element (the analyte) in a sample to be analyzed. AAS can be used to determine over 70 different elements in solution or directly in solid samples. Atomic absorption spectrometry was first used as an analytical technique, and its underlying principles were established, in the second half of the 19th century by Robert Wilhelm Bunsen and Gustav Robert Kirchhoff, both professors at the University of Heidelberg, Germany. The modern form of AAS was largely developed during the 1950s by a team of Australian chemists led by Sir Alan Walsh at the CSIRO (Commonwealth Scientific and Industrial Research Organisation), Division of Chemical Physics, in Melbourne, Australia. In this approach an atomic spectrum is generated. There is a wide range of experimental approaches to measuring absorption spectra; the most common arrangement is to direct a generated beam of radiation at a sample and detect the intensity of the radiation that passes through it. The transmitted energy can then be used to calculate the absorption. The source, sample arrangement and detection technique vary significantly depending on the frequency range and the purpose of the experiment. A typical arrangement is shown in the figure below.

Modern atomic absorption spectrometers


2.3. Photoluminescence Spectroscopy
Photoluminescence spectroscopy is a contactless, nondestructive method of probing the electronic structure of materials. Light is directed onto a sample, where it is absorbed and imparts excess energy into the material in a process called photo-excitation. One way this excess energy can be dissipated by the sample is through the emission of light, or luminescence; in the case of photo-excitation, this luminescence is called photoluminescence. The intensity and spectral content of this photoluminescence are a direct measure of various important material properties. Photo-excitation causes electrons within the material to move into permissible excited states. When these electrons return to their equilibrium states, the excess energy is released, which may include the emission of light (a radiative process) or may not (a nonradiative process). The energy of the emitted light (photoluminescence) relates to the difference in energy levels between the two electron states involved in the transition, i.e., between the excited state and the equilibrium state. The quantity of emitted light is related to the relative contribution of the radiative process.
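In the time-resolved measurements shown next, the excited-state lifetime is typically extracted by fitting the decaying intensity with a mono-exponential function. A minimal sketch of such a fit on synthetic data, using SciPy's curve_fit, is given below; every number is invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, amplitude, tau, background):
    """Mono-exponential decay: I(t) = A * exp(-t / tau) + background."""
    return amplitude * np.exp(-t / tau) + background

# Synthetic decay trace standing in for a measured time-resolved PL curve.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 50.0, 200)                       # time in ns (illustrative)
ideal = mono_exp(t, amplitude=1000.0, tau=8.0, background=20.0)
counts = rng.poisson(ideal).astype(float)             # shot noise on photon counts

popt, pcov = curve_fit(mono_exp, t, counts, p0=(counts.max(), 5.0, 0.0))
amp_fit, tau_fit, bg_fit = popt
print(f"fitted lifetime tau = {tau_fit:.2f} ns")
```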

Setup for time-resolved photoluminescence spectroscopy


Time-dependent photoluminescence intensity for three different nanotubes, showing the decay dynamics of the excited state. The corresponding lifetimes are derived by fitting a mono-exponential decay function.

3. Optical Microscope
The optical microscope remains the fundamental tool for phase identification. It magnifies an image by sending a beam of light through the object, as seen in the schematic diagram below. The condenser lens focuses the light on the sample, and the objective lenses (10X, 40X, ..., 2000X) magnify the beam, which contains the image, and relay it to the projector lens so that the image can be viewed by the observer.

Figure: Schematic diagram of the optical microscope. In order for a specimen to be observed, the sample must first be ground using sandpaper of different grain sizes, then polished to a mirror-like finish and etched with a solution for a certain length of time. Careful technique is critical in sample preparation; without it, the optical microscope is useless.

4. Transmission Electron Microscope (TEM) and Scanning Electron Microscope (SEM)
Electron beams can be used to produce images. The basic operation of a transmission electron microscope (TEM) is that electrons generated by an electron gun are scattered by the sample, focused using electrostatic lenses, and finally form an image. A typical accelerating voltage is 100 kV, for which the electrons have mean free paths of the order of a few hundreds of nm for light elements and a few tens of nm for heavy elements. These are the ideal film thicknesses, since much thinner films would lead to too little scattering and much thicker ones to repeated scattering of the same electron, resulting in a blurred image of low resolution. The imaging mode can be controlled by the use of an aperture: if most of the unscattered electrons are allowed through, the resulting image is called a bright-field image; if scattered beams are selected, the image is known as a dark-field image. In addition to forming images, a TEM can be used for chemical analysis and melting-point determination.


If the TEM is operated in scanning mode, it is known as a scanning electron microscope, or SEM. Electrons scattered from the sample are collected and displayed on a CRT to form the image. The resolution is a few nm, and the magnification ranges from 10 to 500,000 times. One such SEM is shown in the figure below.

Scanning-Tunneling Microscope
The scanning-tunneling microscope (STM) is one of the most powerful microscopes available. It provides atomic-scale resolution of surfaces and is also being developed to move atoms on surfaces. According to its inventors, G. Binnig and H. Rohrer of IBM Zurich, it was first operational in 1981, and they won the 1986 Nobel Prize for this work. The STM relies on a purely quantum-mechanical phenomenon: tunneling. Electrons near surfaces have wave functions that decay into the vacuum outside the surface boundary. The microscope consists of a conducting tip connected to a current-measuring circuit. When the tip is brought into close proximity to the surface (about 1 nm), the decaying wave function from the surface can overlap with the tip; since the tip is conducting, electrons can then move under an applied voltage, creating a current. This current is known as a tunneling current, and its magnitude (as for all tunneling currents) is very sensitive to the tip-surface separation. The current also depends on the density of electron states. A constant current corresponds to a constant altitude of the tip with respect to the surface, so a constant-current scan of the two-dimensional plane reveals the surface structure; the tip motion can then be converted into a grayscale image. The other mode of operation of an STM is constant-height mode. Note that, classically, the system consists of an open circuit, so there should be no current at all.

5. Atomic Force Microscopy
The AFM differs from the STM in that what is measured is the force between the sample and the tip. The AFM operates somewhat like a record player, except that it has flexible cantilevers, sharp tips, and a force feedback system. The spring constant of the cantilever is of the order of 0.1 N/m, which is about ten times more flexible than a Slinky. Since no electric current is involved, the tip and sample do not have to be metallic. There are two modes of operation: contact mode, in which the sample-tip distance is so small that the important force is the core-core repulsion, and noncontact mode, in which the force is the van der Waals force. AFMs can achieve a resolution of 10 pm.
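The extreme surface sensitivity of the tunneling current mentioned above comes from its roughly exponential dependence on tip-sample separation, I proportional to exp(-2*kappa*d), with kappa set by the tunneling barrier height. The sketch below assumes a 4 eV work function (an illustrative value, not one taken from the article) and shows that retracting the tip by a single angstrom cuts the current by almost an order of magnitude.

```python
import math

# Physical constants (SI)
HBAR = 1.054_571_8e-34   # J*s
M_E = 9.109_383_7e-31    # kg
EV = 1.602_176_6e-19     # J

def decay_constant(work_function_ev):
    """kappa = sqrt(2*m*phi) / hbar for a barrier of height phi."""
    return math.sqrt(2.0 * M_E * work_function_ev * EV) / HBAR

def current_ratio(delta_d_m, work_function_ev=4.0):
    """I(d + delta) / I(d) for a current proportional to exp(-2*kappa*d)."""
    kappa = decay_constant(work_function_ev)
    return math.exp(-2.0 * kappa * delta_d_m)

# Retracting the tip by 1 angstrom, assuming a ~4 eV work function:
print(f"kappa ~ {decay_constant(4.0) * 1e-10:.2f} per angstrom")
print(f"current drops to {current_ratio(1e-10):.0%} of its value")
```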


The atomic force microscope was developed to overcome a basic drawback of the STM: it can only image conducting or semiconducting surfaces. The AFM, however, has the advantage of imaging almost any type of surface, including polymers, ceramics, composites, glass, and biological samples. Binnig, Quate, and Gerber invented the atomic force microscope in 1985. Their original AFM consisted of a diamond shard attached to a strip of gold foil. The diamond tip contacted the surface directly, with the interatomic van der Waals forces providing the interaction mechanism. Detection of the cantilever's vertical movement was done with a second tip, an STM placed above the cantilever. Today, most AFMs use a laser-beam deflection system, introduced by Meyer and Amer, in which a laser is reflected from the back of the reflective AFM lever onto a position-sensitive detector. AFM tips and cantilevers are micro-fabricated from Si or Si3N4; the typical tip radius ranges from a few to tens of nm. An AFM is shown below.

Atomic Force Microscopy

CONCLUSION: Nanomaterials are now applied in many areas of industry and product development, which is why their characterization has become very important for material selection and design. The methods discussed above cover the physical characterization of nanomaterials; methods for assessing the mechanical and other properties of nanomaterials are also needed.

REFERENCES:
1. www.wikipedia.org
2. Chapter 5, Nanomaterials: Characterization; P. S. Hale et al., Growth kinetics and modeling of ZnO nanoparticles, J. Chem. Educ. 82 (5), 775 (2005).
3. http://www.sv.vt.edu/classes/MSE2094_NoteBook/96ClassProj/experimental/optical.html
4. http://www.nanoscience.com/education/afm.html


Energy consumption & performance improvements of Green cloud computing
Mithilesh Kumar Dubey & Navin Kumar, Dept. of CS/IT, Jagan Nath University, Jaipur

ABSTRACT: Green cloud computing, that is, large-scale shared IT infrastructure available over the internet, is transforming the way corporate IT services are delivered and managed. To assess the environmental impact of green cloud computing technology, a consulting and outsourcing company and WSP Environment & Energy, a global consultancy dedicated to environmental and sustainability issues, compared the energy use and carbon footprint of cloud and on-premise IT for businesses. The analysis focused on three of Microsoft's mainstream business applications: Microsoft Exchange, Microsoft SharePoint and Microsoft Dynamics CRM. Each application is available both as an on-premise version and as a cloud-based equivalent. The team compared the environmental impact of cloud-based versus on-premise IT delivery on a per-user basis and considered three different deployment sizes: small (100 users), medium (1,000 users) and large (10,000 users). The study found that, for large deployments, energy use and emissions can be reduced by more than 90 percent with a shared cloud service. Several key factors enable cloud computing to lower energy use and carbon emissions from IT:
Dynamic Provisioning: reducing wasted computing resources through better matching of server capacity with actual demand.
Multi-Tenancy: flattening relative peak loads by serving large numbers of organizations and users on shared infrastructure.
Server Utilization: operating servers at higher utilization rates.
Data Center Efficiency: utilizing advanced data center infrastructure designs that reduce power loss through improved cooling, power conditioning, etc.
Though large organizations can lower energy use and emissions by addressing some of these factors in their own data centers, providers of public cloud infrastructure are best positioned to reduce the environmental impact of IT because of their scale. By moving applications to cloud services offered by Microsoft or other providers, IT decision-makers can take advantage of highly efficient cloud infrastructure, effectively outsourcing their IT efficiency investments while helping their company achieve its sustainability goals. Beyond the commonly cited benefits of cloud computing, such as cost savings and increased agility, cloud computing has the potential to significantly reduce the carbon footprint of many business applications. The cloud's unprecedented economies of scale reduce overall cost and increase efficiency, especially when replacing an organization's locally operated on-premise servers.

INTRODUCTION
Green computing, or green IT, refers to environmentally sustainable computing or IT.
Green computing can be defined as "the study and practice of designing, manufacturing, using, and disposing of computers, servers, and associated subsystems (such as monitors, printers, storage devices, and networking and communications systems) efficiently and effectively with minimal or no impact on the environment." Green IT also strives to achieve economic viability and improved system performance and use, while abiding by our social and ethical responsibilities. Green IT thus includes the dimensions of environmental sustainability, the economics of energy efficiency, and the total cost of ownership, which includes the cost of disposal and recycling; it is the study and practice of using computing resources efficiently. Green cloud computing is a buzzword that refers to the potential environmental benefits that information technology (IT) services delivered over the internet can offer society. The term combines the word "green", meaning environmentally friendly, with "cloud", the traditional symbol for the internet and the shortened name for the service delivery model known as cloud computing. According to market research, cloud computing could lead to a potential 38% reduction in worldwide data center energy expenditure by 2020. The savings would be achieved primarily by consolidating data centers and improving power usage effectiveness (PUE), improving recycling efforts, lowering carbon and gas emissions, and minimizing the water used to cool the remaining centers. Because so much of a data center's energy expenditure supports data storage, the Storage Networking Industry Association (SNIA) has promoted new technologies and architectures to help save energy. Advances in SAS drive technology, automated data deduplication, storage virtualization and storage convergence reduce the amount of physical storage a data center requires, which helps decrease its carbon footprint and lower operating expenditure (OPEX) and capital expenditure (CAPEX). Because the colour green is also associated with paper money, the label "green cloud computing" is sometimes used to describe the cost-efficiency of a cloud computing initiative. Both cloud computing and sustainability are emerging as transformative trends in business and society. Most consumers, whether they are aware of it or not, are already heavy users of cloud-enabled services, including email, social media, online gaming, and many mobile applications. The business community has begun to embrace cloud computing as a viable option to reduce costs and to improve IT and business agility. At the same time, sustainability continues to gain importance as a performance indicator for organizations and their IT departments. Corporate sustainability officers, regulators and other stakeholders have become increasingly focused on corporate carbon footprints, and organizations are likewise placing more emphasis on developing long-term strategies to reduce their carbon footprint through more sustainable operations and products. Cloud service providers are making significant investments in data center infrastructure to provide not only raw computing power but also Software-as-a-Service (SaaS) business applications for their customers. New data centers are being built at ever-larger scales and with increased server density, resulting in greater energy consumption.
The Smart 2020 report, "Enabling the Low Carbon Economy in the Information Age", estimates that the environmental footprint of data centers will more than triple between 2002 and 2020, making them the fastest-growing contributor to the Information and Communication Technology (ICT) sector's carbon footprint. It stands to reason that consolidating corporate IT environments into large-scale shared infrastructure operated by specialized cloud providers would reduce the overall environmental impact and unlock new efficiencies. But does this assumption pass the test of a quantitative assessment on a per-user basis? Considerable research has been dedicated to understanding the environmental impact of data centers and to improving their efficiency. However, the aggregate sustainability impact of choosing a cloud-based application over an on-premise deployment of the same application has not been rigorously analyzed. Like broadband and other technologies provided by the ICT sector, cloud computing is emerging as a viable, scalable technology that can help significantly reduce carbon emissions by enabling new solutions for smart grids, smart buildings, optimized logistics and dematerialization. The Smart 2020 report estimates the potential impact of ICT-enabled solutions to be as much as 15 percent of total global carbon emissions (or 7.8 billion tons of CO2 equivalents per year). Broad adoption of cloud computing can stimulate innovation and accelerate the deployment of these enabled solutions. Consequently, cloud computing may have a major impact on global carbon emissions through indirect benefits, in addition to the direct savings from the replacement of on-premise infrastructure analyzed here.

The following metrics are used in the analysis:
User Count: number of provisioned users for a given application.
Server Count: number of production servers needed to operate a given application.
Device Utilization: computational load that a device (server, network device or storage array) handles relative to its specified peak load.
Power Consumption per Server: average power consumed by a server.
Power Consumption for Networking and Storage: average power consumed by networking and storage equipment in addition to server power consumption.
Data Center Power Usage Effectiveness (PUE): data center efficiency metric defined as the ratio of total data center power consumption to the power consumption of the IT equipment. Power usage effectiveness accounts for the power overhead from cooling, power conditioning, lighting and other components of the data center infrastructure.
Data Center Carbon Intensity: amount of carbon emitted to generate the energy consumed by a data center, depending on the mix of primary energy sources (coal, hydro, nuclear, wind, etc.) and transmission losses. The carbon emission factor is a measurement of the carbon intensity of these energy sources.

How Does Cloud Computing Reduce the Environmental Impact of IT?
To understand the potential advantage of green cloud computing in more detail, it is important to look at the distinct factors contributing to a lower per-user carbon footprint. These factors apply across cloud providers in general and are even relevant for many on-premise scenarios. This level of understanding can thus help IT executives target additional efficiency gains in an on-premise environment and realize additional performance advantages in the future.
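Read together, the metrics above combine into a rough per-user estimate along the following lines. Every input in this sketch (server counts, power draw, PUE values, grid carbon intensity) is an illustrative assumption, not a figure from the study.

```python
def per_user_footprint(user_count, server_count, watts_per_server,
                       net_storage_overhead, pue, carbon_kg_per_kwh,
                       hours_per_year=8760):
    """Rough annual per-user energy (kWh) and carbon (kg CO2e) estimate
    built from the metrics defined above. Illustrative only."""
    it_kw = server_count * watts_per_server * (1 + net_storage_overhead) / 1000.0
    facility_kwh = it_kw * pue * hours_per_year     # PUE scales IT power to facility power
    carbon_kg = facility_kwh * carbon_kg_per_kwh    # grid carbon intensity
    return facility_kwh / user_count, carbon_kg / user_count

# Assumed small on-premise deployment versus the same users on a shared cloud service.
scenarios = {
    "on-premise (100 users)": dict(user_count=100, server_count=6,
                                   watts_per_server=300, net_storage_overhead=0.5,
                                   pue=1.97, carbon_kg_per_kwh=0.5),
    "cloud share (100 users)": dict(user_count=100, server_count=1,
                                    watts_per_server=300, net_storage_overhead=0.5,
                                    pue=1.15, carbon_kg_per_kwh=0.5),
}
for label, args in scenarios.items():
    kwh, kg = per_user_footprint(**args)
    print(f"{label}: {kwh:,.0f} kWh/user/yr, {kg:,.0f} kg CO2e/user/yr")
```

With these placeholder inputs the per-user figures differ by roughly a factor of ten between the two cases, which is the kind of gap the study attributes to large shared deployments; the actual results depend on the study's own measured inputs.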


Generally speaking, the comparatively smaller carbon footprint of cloud computing is a consequence of both improved infrastructure efficiency and a reduced need for IT infrastructure to support a given user base. In turn, these primary levers are heavily influenced by four key factors:
Dynamic Provisioning
Multi-Tenancy
Server Utilization
Data Center Efficiency (expressed by power usage effectiveness)

Dynamic Provisioning
IT managers typically deploy far more server, networking and storage infrastructure than is actually needed to meet application demand. This kind of over-provisioning typically results from:
the desire to avoid ongoing capacity adjustments as demand fluctuates over time;
difficulty in understanding and predicting demand growth and peak loads;
budget policies that encourage using all available funds in a given year to avoid smaller allocations the following fiscal year.
Over-provisioning is certainly understandable. Application availability is a high priority in IT operations, because IT executives want to avoid situations in which business demand for services exceeds what IT can provide. Thus infrastructure planning is typically conducted with a conservative, "just in case" mindset that results in capacity allocation that is not aligned with actual demand. By contrast, cloud providers tend to manage capacity much more diligently, because over-provisioning at the cloud's operational scale can be very expensive. Providers typically have dedicated resources to monitor and predict demand and continually adjust capacity, and their teams have developed greater expertise in demand modeling and in the use of sophisticated tools to manage the number of running servers. Thus, cloud providers can reduce the inefficiency caused by over-provisioning by optimizing the number of active servers needed to support a given user base.

Multi-Tenancy
Just as multiple tenants in an apartment building use less power overall than the same number of people owning their own homes, so do the multiple tenants of a cloud-provided infrastructure reduce their overall energy use and associated carbon emissions. The cloud architecture allows providers to serve multiple companies simultaneously on the same server infrastructure. Disparate demand patterns from numerous companies flatten overall demand peaks and make fluctuations more predictable. The ratio between peak and average loads becomes smaller, which in turn reduces the need for extra infrastructure, as the simulation sketch below illustrates. Major cloud providers are able to serve millions of users at thousands of companies simultaneously on one massive shared infrastructure. By operating multi-tenant environments, cloud providers can also reduce the overhead of on-boarding and managing individual organizations and users.
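The peak-flattening effect of multi-tenancy can be illustrated with a small simulation: many tenants with independently timed daily peaks are aggregated, and the peak-to-average ratio of the combined load is compared with that of a single tenant. The demand model and all numbers here are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
HOURS = 24 * 7                      # one simulated week at hourly resolution

def tenant_demand():
    """Synthetic hourly demand for one tenant: a daily cycle with a
    randomly shifted peak plus noise (arbitrary units)."""
    t = np.arange(HOURS)
    phase = rng.uniform(0, 24)
    base = 50 + 40 * np.maximum(0, np.sin(2 * np.pi * (t - phase) / 24))
    return base + rng.normal(0, 5, HOURS)

def peak_to_average(load):
    return load.max() / load.mean()

single = tenant_demand()
aggregate = sum(tenant_demand() for _ in range(1000))   # 1000 tenants sharing

print(f"single tenant peak/average : {peak_to_average(single):.2f}")
print(f"1000-tenant peak/average   : {peak_to_average(aggregate):.2f}")
# The flatter aggregate load needs proportionally less spare capacity.
```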


The Microsoft cloud offerings analyzed in this study are relatively new and are currently experiencing rapid growth. The more mature a given cloud service becomes, the less its demand will fluctuate, resulting in even greater energy savings in the future.

Server Utilization
Green cloud computing can drive energy savings by improving server utilization, i.e., the portion of a server's capacity that an application actively uses. As large-scale cloud providers tend to run their infrastructure at higher and more stable utilization levels than corresponding on-premise operations, the same tasks can be performed with far fewer servers. Whereas a typical on-premise application may run at a 5 to 10 percent average utilization rate, the same application in the cloud may attain 40 to 70 percent utilization, dramatically increasing the number of users served per machine. It is important to note that while servers running at higher utilization rates consume more power, the resulting increase is more than offset by the relative performance gains. As illustrated in Figure 3, increasing the utilization rate from 5 to 20 percent allows a server to process four times the previous load, while the power consumed by the server may only increase by 10 or 20 percent.

Figure 3: Power consumed versus performance as server utilization rises from 5 percent toward 100 percent; the large performance increase is accompanied by only a minor increase in power consumption.

Virtualization offers a strategy to improve server utilization for both cloud and on-premise scenarios by allowing applications to run in an environment separated from the underlying physical servers. Multiple virtual machines can share a physical server running at high utilization, which reduces the number of physical servers required to meet the same demand. IT organizations can scale individual virtual resources to fit application needs instead of allocating an entire physical system whose full capability is not utilized. In this way, virtualization provides a tool for IT departments to narrow the efficiency gap between on-premise deployment and a multi-tenant cloud service.

Data Center Efficiency


Data center design (the way facilities are physically constructed, equipped with IT and supporting infrastructure, and managed) has a major impact on the energy use for a given amount of computing power. A common measure of how efficiently a data center uses its power is the power usage effectiveness (PUE) ratio, defined as the ratio of the overall power drawn by the data center facility to the power delivered to the IT hardware. For example, a power usage effectiveness of 1.5 means that for every 1 kWh of energy consumed by IT hardware, the data center must draw 1.5 kWh of energy, with 0.5 kWh used for cooling the IT equipment, transforming and conditioning the grid power, lighting and other non-IT uses. Standardizing and measuring average power usage effectiveness across companies can be difficult. However, the US Environmental Protection Agency has released an update to its initial 2007 Report to Congress stating an average power usage effectiveness of 1.91 for U.S. data centers, with most businesses averaging 1.97. Through innovation and economies of scale, cloud providers can significantly improve power usage effectiveness: today's state-of-the-art data center designs for large cloud service providers achieve levels as low as 1.1 to 1.2. This efficiency gain could reduce power consumption relative to traditional enterprise data centers by 40 percent through data center design alone. Innovations such as modular container designs, cooling that relies on outside air or water evaporation, and advanced power management through power supply optimization have all significantly improved power usage effectiveness in data centers. As green cloud computing gains broader adoption and the share of data processing performed by modern data center facilities increases, the industry's average PUE should improve. In parallel, new data center designs continue to push the envelope on driving greater efficiencies. Together, these two trends will drive greater efficiency in data centers.
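A quick arithmetic check of the PUE figures quoted above: since facility energy is IT energy multiplied by PUE, moving from the reported enterprise average of about 1.91 to a state-of-the-art 1.1 to 1.2 removes roughly 40 percent of the facility energy for the same IT load. The IT load value in this sketch is an arbitrary example.

```python
def facility_kwh(it_kwh, pue):
    """Total facility energy = IT energy * PUE."""
    return it_kwh * pue

it_load = 1_000.0                            # kWh consumed by IT hardware (example)
typical = facility_kwh(it_load, pue=1.91)    # average enterprise data center
modern = facility_kwh(it_load, pue=1.15)     # state-of-the-art cloud facility

print(f"typical facility : {typical:.0f} kWh (overhead {typical - it_load:.0f} kWh)")
print(f"modern facility  : {modern:.0f} kWh (overhead {modern - it_load:.0f} kWh)")
print(f"facility energy saved: {1 - modern / typical:.0%}")   # roughly 40 percent
```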

Green Data Center

Other Important Factors
In addition to the four primary drivers of green cloud computing's environmental advantage, other contributing factors are worth mentioning. Hardware comes with an embodied carbon footprint from the energy associated with producing, distributing and disposing of equipment. For the scenarios analyzed, this energy outlay adds about 10 percent to the footprint from IT operations. The total hardware impact depends heavily on the type of equipment, refresh cycles and end-of-life practices. By optimizing hardware selection, management and disposal, cloud providers can outperform on-premise IT in terms of environmental impact. Cloud providers are also more likely to take an active role in tailoring hardware components to the specific needs of the services they run.


By collaborating with suppliers on the specification and design of servers and other equipment for maximum efficiency, they realize benefits that are too complex for most corporate IT departments to address. Application code and configuration provide additional opportunities for efficiency gains, and cloud providers are more likely to take advantage of them. Developers can write applications with more efficient processing, memory utilization and data fetches, ultimately yielding additional savings in the physical consumption of central processing unit (CPU), storage, memory and network resources. The result is that less physical infrastructure is needed to deliver a given application workload.

Conclusion: As the efficiency of green cloud computing increases, more services will develop, and while each service or transaction will continue to use less energy, there is a strong possibility that, in aggregate, computing will use more energy over time. The challenge is to ensure that the services provided in the green cloud actually replace current activities of higher carbon intensity. As an analogy, a study on music distribution shifting to an online model demonstrated significant carbon savings, as long as consumers do not also burn the downloaded music onto CDs. Green cloud computing has enormous potential to transform the world of IT, reducing costs, improving efficiency and business agility, and contributing to a more sustainable world. This study confirms that green cloud computing can reduce energy use by 30 to 90 percent for major business applications today, and further energy savings are likely as green cloud computing continues to evolve. Companies that adopt green cloud computing will accrue the inherent business benefits of this technology and will also play a crucial role in making IT more sustainable by significantly reducing energy consumption.

References:
1. http://www.greencomputing.co.in/
2. http://www.greenclouds.in/
3. http://www.google.co.in/imges
4. Science Tech Entrepreneur / Green computing
5. Cloud Computing and Sustainability: Environmental Benefits of Moving to the Cloud, Accenture.
6. en.wikipedia.org/wiki/Green computing

