




Volume: 11 | Issue: 03


₹100

Volume: 01 | Issue: 08 | Pages: 112 | May 2013

Patrol & Protect


Monitor Your Network With Cacti

A Peek Into The Top Network Monitoring Tools For Admins

Boost Your Business With Cloud Monitoring Tools

India ₹100 | US $12 | Singapore S$9.5 | Malaysia MYR 19


"Linux Jobs Are On The Rise": Ralf Flaxa, Vice President, Engineering, SUSE

A Look At Interview Questions Based On The Linux OS

Skava Is Looking To Hire Smart Programmers


32 A Primer on WSGI
38 Consistent Hashing With memcached
42 Unlock the Potential of QForms
46 The Emergence of Open Source Players in the Database Space
50 Heterogeneous Parallel Programming: Dive into the World of CUDA
54 Monitoring and Graphing Your Network With Cacti
58 A Look at the Top Three Network Monitoring Tools
62 Linux Firewall: Executing Iprules Using PHP
64 A Peek Into Some Cloud Monitoring Tools
67 Monitoring Cloud Instances with Ganglia and sFlow
71 Graphing Network Performance with MRTG
82 Deploy Honeypots to Secure Your Network

32-bit MATE and 64-bit Cinnamon


OS4 OpenDesktop: An easy-to-use and user-friendly Linux distribution for desktops and servers
Fully Automated Nagios: A CentOS-based Linux distribution integrated with tools for network monitoring
Untangle: A Debian-based Linux distribution that can act as your network gateway

4 | May 2013

Rahul Chopra


Editorial, Subscriptions & Advertising

Delhi (HQ): D-87/1, Okhla Industrial Area, Phase I, New Delhi 110020; Phone: (011) 26810602, 26810603; Fax: 26817563; E-mail:
BENGALURU: Ms Jayashree; Ph: (080) 25260023; Fax: 25260394; E-mail:

Customer Care Back Issues

E-mail:
Kits n Spares, New Delhi 110020; Phone: (011) 26371661-2; E-mail:; Website:

08 You Said It...
12 Q&A Powered By
14 New Products
17 Offers of the Month
18 Open Gadgets
22 FOSSBytes
28 Innovation
65 Editorial Calendar
107 FOSS Jobs
108 Tips & Tricks
110 Events

CHENNAI: Saravana Anand; Mobile: 09916390422; E-mail:
HYDERABAD: Saravana Anand; Mobile: 09916390422; E-mail:
KOLKATA: Gaurav Agarwal; Ph: (033) 22294788; Telefax: (033) 22650094; Mobile: 9891741114; E-mail:
MUMBAI: Ms Flory DSouza; Ph: (022) 24950047, 24928520; Fax: 24954278; E-mail:
PUNE: Sandeep Shandilya; Ph: (022) 24950047, 24928520; E-mail:
GUJARAT: Sandeep Roy; E-mail:; Ph: (022) 24950047, 24928520
SINGAPORE: Ms Peggy Thay; Ph: +65-6836 2272; Fax: +65-6297 7302; E-mail:
UNITED STATES: Ms Veronique Lamarque, E & Tech Media; Phone: +1 860 536 6677; E-mail:
CHINA: Ms Terry Qin, Power Pioneer Group Inc., Shenzhen-518031; Ph: (86 755) 83729797; Fax: (86 21) 6455 2379; Mobile: (86) 13923802595, 18603055818; E-mail:
TAIWAN: Leon Chen, J.K. Media, Taipei City; Ph: 886-2-87726780 ext. 10; Fax: 886-2-87726787


100 Skava is Looking to Hire Smart Programmers: Arish Ali, CEO and co-founder, Skava Inc
102 "We are where we are because of open source technology": Harsh Jaiswal, founder of Medma Infomatix Pvt Ltd

86 Leaning on LaTeX
94 Boost Your Career as a Database Administrator

Open Gurus
74 GRUB 2 Demystified: A Complete Perspective
79 Slackware: Simple, Straightforward and Stable

Exclusive News-stand Distributor (India)

IBH Books and Magazines Distributors Pvt Ltd, Arch No. 30, below Mahalaxmi Bridge, Mahalaxmi, Mumbai 400034; Tel: 022-40497401, 40497402, 40497474, 40497479; Fax: 40497434; E-mail:
Printed, published and owned by Ramesh Chopra. Printed at Tara Art Printers Pvt Ltd, A-46,47, Sec-5, Noida, on the 28th of the previous month, and published from D-87/1, Okhla Industrial Area, Phase I, New Delhi 110020. Copyright 2013. All articles in this issue, except for interviews, verbatim quotes, or unless otherwise explicitly mentioned, will be released under the Creative Commons Attribution-Share Alike 3.0 Unported Licence a month after the date of publication. Refer to for a copy of the licence. Although every effort is made to ensure accuracy, no responsibility whatsoever is taken for any loss due to publishing errors. Articles that cannot be used are returned to the authors if accompanied by a self-addressed and sufficiently stamped envelope. But no responsibility is taken for any loss or delay in returning the material. Disputes, if any, will be settled in a New Delhi court only.

For You & Me

89 "Linux jobs are on the rise": Ralf Flaxa, vice president, Engineering, SUSE
92 Mathematics Made Easy With Minimal Octave
98 "For modern day tablets and smartphones, Android has become a default": Indrajit Sabharwal, managing director, Simmtronics Semiconductors Ltd

36 Exploring Software: Network Deployment and Alternative OSs
44 CodeSport

LEADING PLAYERS A List Of Mobile (Android) Apps Development Providers


SUBSCRIPTION RATES

Period        News-stand price   You pay
Five years    ₹6000              ₹3600
Three years   ₹3600              ₹2520
One year      ₹1200              ₹960
Overseas (one year)              US$ 120

Kindly add ₹50 for outside-Delhi cheques. Please send payments only in favour of EFY Enterprises Pvt Ltd. Non-receipt of copies may be reported to; do mention your subscription number.


Maximize Your SAP Investment

SUSE Linux Enterprise Server for SAP Applications is the only operating system optimized for SAP software solutions, helping organizations reduce risk, cost and complexity in the most efficient way.
Running the number-one Linux platform for SAP applications has given thousands of customers exciting benefits. This has been possible due to the strong and proven relationship between SUSE and SAP, where tightly integrated products as well as comprehensive operating system service and support options have allowed organizations to integrate, secure and manage information assets, and thus reduce the complexities and costs in their business operations.

SUSE AND SAP CLOUD
More and more SAP customers, from small companies to enterprises, are beginning to use the cloud for parts of their IT infrastructures. SAP is responding to this trend with a complete line of cloud-based business and collaboration products. In addition, SAP customers can now deploy their SAP solutions on Amazon EC2 in production and non-production environments. Amazon EC2 running SUSE Linux Enterprise Server is a proven platform for development, test and production of SAP workloads, and provides an easily manageable cloud environment for SAP customers.

SUSE AND SAP STREAMWORK ENTERPRISE EDITION
The enterprise version of SAP StreamWork bridges the gap between people's demands for easy-to-use, quickly available software and access to enterprise information, and IT's challenges in delivering enterprise data to cloud applications securely. SAP chose SUSE Linux Enterprise Server as the operating system platform for this collaborative, decision-making solution that brings people and information together. SUSE Linux Enterprise is optimized for physical and virtualized SAP applications and, in combination with SUSE Studio, is the platform of choice for SAP appliances. More information on: enterprise

SUSE AND SAP HANA
SUSE Linux Enterprise Server was selected for use with SAP HANA, a flexible, multipurpose, data-source-agnostic, in-memory appliance that combines SAP software components optimized on most major hardware platforms. With SUSE Linux Enterprise Server supporting SAP HANA, SAP customers can further maximize the value of implementing leading-edge technology in Linux environments. Leading SAP hardware partners have selected SUSE Linux Enterprise Server for SAP Applications. More information on: promo/saphana.html

SUSE HIGH-AVAILABILITY SOLUTIONS FOR SAP CUSTOMERS
Many critical business environments require the highest possible SAP application availability. SAP comes with some basic redundancy mechanisms out of the box. However, for full high availability, SAP relies on third-party high-availability cluster solutions designed to cover all components in the SAP solution stack. SUSE Linux Enterprise Server for SAP Applications includes high-availability components that provide an integrated clustering solution for physical and virtual Linux deployments, enabling customers to implement highly available Linux clusters and eliminating single points of failure. More information on: products/sles-for-sap/

With more than 3,500 customers worldwide, the SAP Solution Manager provides seamless, integrated, 24-hour enterprise support from SUSE and SAP. Today, over 70 per cent of SAP's Linux customers use SUSE Linux Enterprise Server.

ITC, ABP, Café Coffee Day and Mahindra Retail are some of the large enterprises that use SUSE for SAP in India.
SUSE is one of the four business units of The Attachmate Group, the others being Novell, NetIQ and Attachmate. Backed by more than twelve years of technology and engineering collaboration, SUSE Linux Enterprise Server for SAP Applications is the first validated open source high-availability solution and the winner of the SAP Pinnacle Award 2008 for Technology Co-Innovation. For more information, visit, follow us on Twitter @susesapalliance or contact us at:
For more details: The Attachmate Group, Laurel, Block D, 65/2, Bagmane Tech Park, C.V. Raman Nagar, Byrasandra Post, Bangalore (Karnataka); Phone: 80-40022300

Our relationship with SUSE demonstrates the power of co-innovation and is a strong example of how SAP's ecosystem of industry-focused and community-powered partners delivers value to our customers.
INGO BRENCKMANN Program Manager, Data and Analytic Engines, SAP

Include both 32-bit and 64-bit software on the bundled DVD

I have been reading LFY (now OSFY) for a very long time. Let me congratulate you on providing readers an excellent platform to learn Linux and open source software. It will not be an exaggeration to say that I have learnt a lot by reading your magazine, much more than from books. I am sure there are a lot of open source fans who love the content of OSFY. Please keep up the good work. I also wanted to mention that there is always a dispute on whether it should be 32-bit or 64-bit software on the CD/DVD that you bundle with the magazine every month. Is it not possible to have both 32- and 64-bit software on a single DVD? This would indeed be welcome.
S Auluck,

ED: Thanks for the wonderful feedback! Such enthusiastic mails help us do even better. Yes, it is definitely feasible to have both 32-bit and 64-bit software on a single DVD and, in fact, it is something we have done before. For example, last year, our May 2012 edition with the Gentoo live DVD had both 32- and 64-bit software. So was the case with our June 2012 Ubuntu DVD. We have taken note of your suggestion. Till then, keep reading OSFY and continue sending us your valuable inputs.

A wish-list for OSFY

I recently received the current issue of OSFY, and I'd like to say a few words:
1. As a regular reader of your magazine, right from Issue 1, I have been trying the various flavours of Linux. They have helped me a lot professionally. Mine is an old computer with 810 VGA. Except for Fedora and Ubuntu, I could not run the distros most of the time. I had problems even with Mint. It is surprising that such a vibrant Linux community is not able to provide a simple VGA driver so that booting into Linux is possible just like in Windows. Although I am not qualified to comment on such a knowledgeable group, I wonder whether it is really that difficult to add such a simple driver.
2. Second, I tried PCLinuxOS and Arch. I could run the former only in 'safe' mode, which stopped at the terminal. When I tried Arch Linux from the USB stick (after running the 'dd' command in Ubuntu), all I saw was a text file. Obviously, I need to install it step by step, which is somewhat risky unless one has a separate HDD for this purpose. And besides, this will be time-consuming. In one of my earlier letters, I requested a simple write-up on the DVD's contents. But sadly, this hasn't appeared in the magazine. Such an article would have been useful. My experiment with 'Snowlinux' has not worked out. Perhaps I need to spend more time on it.
In the light of these thoughts and suggestions: for a long time, I have felt that most of the articles are geared only towards developers. But what about students, newcomers or hobbyists who would like to know about new developments or use the computer for fun? Apart from the 'Regular Features', there were only two articles, 'A Look at the Basics of LVM' and 'Kickstarting Virtualisation with VirtualBox', in the last issue that were useful for the ordinary reader. Perhaps I am in a minority. However, you may like to solicit the views of other readers on this matter. I shall remain a subscriber of OSFY till the end. In fact, yours is the only magazine I subscribe to nowadays. Earlier, I used to receive half a dozen magazines every month.
V S Nagasayanam,

ED: Let me take this opportunity to thank you for going through our magazine and sending us such insightful feedback. It also feels good to know that the Linux flavours we bundle with OSFY have been useful to you. So let me answer your queries, one by one. 1) We would suggest that you post this query on our Facebook page (OpenSourceForU), where you will get a lot of suggestions from our FB community. Else, you can contact the Linux User Groups and put forth your query. We are sure this will help you too. 2) Our April 2013 issue's DVD on openSUSE 12.3 had details about the basic installation of the DVD. You can have a look at it. 3) Yes, you do have a valid point about the magazine needing to cater to a wider audience. We try our best to feature articles for beginners too. It's all about balancing everyone's interests in our editions. Thanks again for the great feedback. Keep it coming.
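The 'dd' step the letter mentions for writing Arch to a USB stick can be sketched as below; a minimal sketch assuming GNU dd, where `write_iso` is a hypothetical helper and `/dev/sdX` is only a placeholder for the stick (check the real device name with `lsblk` first, since dd overwrites its target without asking).

```shell
# Write a distro ISO to a USB stick, as the letter describes doing for Arch.
# WARNING: dd overwrites the target device; verify it with `lsblk` first.
write_iso() {
    iso="$1"    # path to the downloaded ISO
    dev="$2"    # target device, e.g. /dev/sdX (the whole disk, not a partition)
    dd if="$iso" of="$dev" bs=4M conv=fsync status=progress
}

# Usage (destructive!):
#   sudo write_iso archlinux.iso /dev/sdX
```

The `conv=fsync` flag makes dd flush the data to the device before exiting, so the stick can be removed safely once the command returns.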

Loved reading 'Will certifications related to FOSS...'

I was really impressed with the article 'Will certifications related to FOSS help you get a job?' published in the April 2013 issue of OSFY. I feel that FOSS beginners and students should read it once, as they start their career in the open source domain. It was interesting to read the write-up as it had a bunch of suggestions from Linux pioneers and open source communities. Looking forward to more such articles in your forthcoming issues. T Nagaraja,

The Only IT Magazine That's Read By Software Developers & IT Administrators


Open Source For You

(Formerly LINUX For You)

World's #1 Open Source Magazine

Get Noticed! Advertise Now! Contact Omar @ +9958881862
EFY Enterprises Pvt Ltd D-87/1, Okhla Industrial Area, Phase 1, New Delhi 110 020

ED: Thanks a lot for writing to us and letting us know that you love reading OSFY. If you wish to have a detailed insight into Python, you can refer to http:// You will find everything you wish to know about it. Hope this helps. We cover a lot of career-related topics in our magazine and you can go through them. With respect to programming as a career, you can have a look at our October 2012 issue, in which we covered careers in PHP, which is again a sought-after terrain.

ED: Thanks! It's great to hear that the article was inspiring. The story definitely couldn't have happened without the valuable inputs of the LUG members and our Facebook community. We aim to reach out to more members of both communities and come up with more such articles in future editions.

A career in programming
I have been reading OSFY since last year and I like the content. I would really appreciate it if you could let me know what the actual use of Python is as a programming language. As per my understanding, it is generally used for developing games. Please tell me more about this topic. I am planning to make a career in programming. Can you suggest any courses? Akshay Mukadam,

An issue on Raspberry Pi

Harshdeep Sokhey: A request: can we have a complete edition dedicated to Raspberry Pi? That would be great! It has been generating a buzz over the last year, so an entire issue on it would be welcome. Open Source For You: Thanks a lot for the suggestion. We agree that Raspberry Pi is definitely the rage these days. Though we may not dedicate a complete edition to it, we have featured a number of articles on it in our previous editions.

Stop the extra packaging of OSFY

Prasad Cholakkottil: I have been a regular reader of your magazine for several years now. I have a request: please stop using the extra plastic cover that you started shipping with the magazine since the last issue. It took me a whole minute to get the magazine out of the packing. The tapes are messy. Why not keep things simple? I am not going to stop reading the magazine, though! I appreciate all your efforts and it's a great magazine to subscribe to. Keep up the good work!

Open Source For You: Hi Prasad, we are happy to know that you like reading OSFY and find the content interesting. Thanks a lot for the compliment. The extra packaging is a temporary move to reach out to those who are not aware of OSFY. We are doing this in an attempt to build a wider OSFY community.

Share Your

Please send your comments or suggestions to:

The Editor D-87/1, Okhla Industrial Area, Phase I, New Delhi 110020 Phone: 011-26810601/02/03, Fax: 011-26817563, Email:

Trained participants from over 43 Countries in 6 Continents
Linux OS Administration & Security Courses for Migration
LLC102: Linux Desktop Essentials
LLC033: Linux Essentials for Programmers & Administrators
LLC103: Linux System & Network Administration
LLC203: Linux Advanced Administration
LLC303: Linux System & Network Monitoring Tools
LLC403: Qmail Server Administration
LLC404: Postfix Server Administration
LLC405: Linux Firewall Solutions
LLC406: OpenLDAP Server Administration
LLC408: Samba Server Administration
LLC409: DNS Administration
LLC410: Nagios - System & Network Monitoring Software
LLC412: Apache & Secure Web Server Administration
LLC414: Web Proxy Solutions

Courses for Developers
LLC104: Linux Internals & Programming Essentials
LLC106: Device Driver Programming on Linux
LLC108: Bash Shell Scripting Essentials
LLC109: CVS on Linux
LLC204: MySQL on Linux
LLC205: Programming with PHP
LLC206: Programming with Perl
LLC207: Programming with Python
LLC208: PostgreSQL on Linux
LLC504: Linux on Embedded Systems
LLC702: Android Application Development

RHCE Certification Training
RH124: Red Hat System Administration - I
RH134: Red Hat System Administration - II
RH254: Red Hat System Administration - III
RH299: RHCE Rapid Track Course

RHCVA / RHCSS / RHCDS / RHCA Certification Training
RHS333: Red Hat Enterprise Security: Network Services
RH423: Red Hat Enterprise Directory Services & Authentication
RH401: Red Hat Enterprise Deployment & Systems Management
RH436: Red Hat Enterprise Clustering & Storage Management
RH442: Red Hat Enterprise System Monitoring & Performance Tuning
RHS429: Red Hat Enterprise SELinux Policy Administration
RH318: Red Hat Enterprise Virtualization

NCLA / NCLP Certification Training
Course 3101: SUSE Linux Enterprise 11 Fundamentals
Course 3102: SUSE Linux Enterprise 11 Administration
Course 3103: SUSE Linux Enterprise Server 11 Advanced Administration

Linux Learning Centre 14th Anniversary Special Offer

Training + Certification Exam Bundle Offers from 01 May to 30 June 2013. Call or Email for Details

RHCVA / RHCSS / RHCA Training - Exams

RH318: 13 & 20 May 2013; EX318: Call
RHS333: 13 May; EX333: 27 May
RH423: 20 May; EX423: 28 May
RH401: 6 May; EX401: 10 May
RH436: 13 May; EX436: 17 May
RH442: 20 May; EX442: 24 May

RH199/299 from 6, 13, 20, 27 May; EX200/300 exams on 10, 17 & 24 May
LLC - Authorised Novell Practicum Testing Centre NCLP Training on Courses 3101, 3102 & 3103

CompTIA Storage+ & Cloud+ Training & Certification Anniversary Special Offer
Microsoft Training Co-venture: CertAspire
Microsoft Certified Learning Partner
For more info log on to: Call: 9845057731 / 9449857731


RHCSA, RHCE, RHCVA, RHCSS, RHCDS & RHCA Authorised Training & Exam Centre


Registered Office: # 635, 6th Main Road, Hanumanthnagar, Bangalore 560019

# 2, 1st E Cross, 20th Main Road, BTM 1st Stage, Bangalore 560029. Tel: +91.80.22428538, 26780762, 65680048 Mobile: 9845057731, 9449857731, 9343780054 Powered By

Suresh Dharavath: How do I convert .pdf to .odt (LibreOffice Writer) in Ubuntu?

Riya Patankar: Tricky one, try Okular (it's tricky to preserve the formatting). Or try libreoffice-pdfimport, or pdfedit.

Suresh Dharavath: Yes, I installed Okular but I am not able to understand how to convert the .pdf to .odt.

Riya Patankar: Copy and paste!

Er Avishek Kumar: Very easy! In RHEL, there is a command called convert. Run convert from /location/file1.pdf to /location/file1.odt. Hope this helps.

Suresh Dharavath: Yeah, I did it but all the information got corrupted.

Er Avishek Kumar: It all depends on your file type and size. Check and convert your file online there, if you have luck.

Ankit Jain: Hello, I want to install Red Hat alongside Windows 7, but not in VMware. How do I install it? Please help.

Riya Patankar: Maybe this will help. It's hard to create a good quality video while installing. http://www. Windows_in_Dual_Boot_Environment.

Mohd Rafeeq Siddiquie: Can anyone please tell me if Google is a cloud or not?

Riya Patankar: I think Google Drive is a cloud. You can upload and download from anywhere.

Jeet Singh: Yup, you can download Google Drive for your desktop too.

Mohd Rafeeq Siddiquie: Riya & Jeet, I agree with both of you. But how can we define it technically?

Niraj Kumar Jha: Yes... See, the cloud doesn't just mean accessing something over the web. It's an on-demand subscription of resources for hosting content (anything like RAM, disk space, etc). The way the cloud serves our purpose is that it has a set of connected systems which serve that purpose. To the best of my knowledge, Google uses cloud services. It serves as a cloud environment to end users as well, because you don't know where your mail/data resides in the cloud network and you can still use the services offered. (It's a separate thing that it doesn't charge users like us anything.)

Uttkarsh Tiwari: Yes, it is.

Nickton Dias: Can anybody help me in loading drivers for my wireless adapter in a Dell Inspiron 3421 for Ubuntu 12.10?

Riya Patankar: Either run an update, or install the specific package for the hardware. (Try to find the make and model of the wireless adapter.)

Nickton Dias: Hey Riya, I tried the update :) ...but it wasn't any good. The hardware is a Dell Wireless 1704 802.11b/g/n (2.4 GHz) [I suppose it's a Broadcom-sourced chip, not sure]. The hardware was released after the Ubuntu 12.10 release. What to do next? Any tip?

Riya Patankar: On a Fedora-based machine, we search for *firmware* and then install the appropriate firmware (from the repository). Try to search for firmware with apt-cache search *firmware* and then install the appropriate one by running apt-get install xxx-firmware-yyy.

Nickton Dias: Just tried that... nothing changed significantly! Any other suggestion?

Riya Patankar: After that, reboot it, or upgrade the kernel and then reboot.

Pritesh Patankar: Ubuntu 12.10 has all the wireless drivers, but it seems your installation dropped that thing. So try this: sudo apt-get install wireless-tools; sudo apt-get install linux-headers-generic; sudo apt-get install --reinstall bcmwl-kernel-source; sudo modprobe wl.

Jeet Singh: Download the Broadcom drivers from the Broadcom site (b43-fwcutter and patch). Open the downloaded file with the Ubuntu Software Center and click Install. Copy the downloaded file into your home directory and untar it with "tar xfvj file_name" (this is for a bz2 file). Next, "sudo b43-fwcutter -w /lib/firmware wl_apsta-", then "sudo b43-fwcutter --unsupported -w /lib/firmware broadcomwl-". Now close the terminal, press the Windows key in Unity and type "addit"; it will show you Additional Drivers. Open it and activate the driver. It should work.
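Pulling together the package suggestions from the wireless thread above: identify the exact chip with lspci -nn, then install the package that ships its driver or firmware. A minimal sketch assuming Ubuntu package names (bcmwl-kernel-source for the proprietary 'wl' driver, firmware-b43-installer for the open b43 driver); `pick_broadcom_package` is a hypothetical helper, and the PCI device IDs in it are illustrative examples only, not an authoritative list.

```shell
# Identify the wireless chip first, e.g.:
#   lspci -nn | grep -i network
# which prints something like "... Broadcom ... [14e4:4727]".

# Map a Broadcom PCI device ID to the Ubuntu package commonly suggested for it.
# The IDs below are illustrative examples; verify yours before installing.
pick_broadcom_package() {
    case "$1" in
        4727|4353|4357) echo "bcmwl-kernel-source" ;;    # proprietary 'wl' driver
        4311|4312|4315) echo "firmware-b43-installer" ;; # open b43 driver + firmware
        *)              echo "unknown" ;;
    esac
}

# Then follow the thread's advice, e.g.:
#   sudo apt-get install --reinstall "$(pick_broadcom_package 4727)"
#   sudo modprobe wl
```

If the function returns "unknown", fall back to the apt-cache search *firmware* approach described in the thread.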

Image quality is poor as the photos have been taken directly from Facebook.
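For the PDF-to-ODT question in the thread above, the libreoffice-pdfimport suggestion can also be driven from the command line. A sketch, assuming LibreOffice with the libreoffice-pdfimport package installed; `pdf_to_odt_cmd` is a hypothetical helper that only builds the commonly cited headless invocation (the writer_pdf_import input filter), and, as the thread notes, formatting fidelity varies.

```shell
# Build the LibreOffice headless command that imports a PDF into Writer
# and saves it as ODT. Assumes the libreoffice-pdfimport package is installed.
pdf_to_odt_cmd() {
    pdf="$1"     # input PDF
    outdir="$2"  # directory for the resulting .odt
    printf 'libreoffice --headless --infilter=writer_pdf_import --convert-to odt --outdir %s %s\n' \
        "$outdir" "$pdf"
}

# Run the conversion it prints, e.g.:
#   eval "$(pdf_to_odt_cmd report.pdf ./converted)"
```

Without the --infilter option, LibreOffice tends to open PDFs in Draw rather than Writer, which is why the filter name is forced here.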
Arush Salil: I have a huge problem. I was moving some of my folders to my HDD and somehow the HDD cable got unplugged, disconnecting the HDD in the middle of the transfer. Now I can't find my files anywhere (not in the source folder, and not on the HDD), whereas when I actually ran "locate" for the folder, it listed the folder in the source location. But there is no such folder there. So where did my data go? Can I somehow recover it?

Ritesh Chaudhari: I think it's possible using

Shivam Sharma: To recover the deleted files, you can also use PhotoRec.

Rajat Khandelwal: I tried to install Fedora 18 Desktop Edition on my desktop but unfortunately it shows the error "NO BOOTABLE PARTITION FOUND". Please help me!

Riya Patankar: I think you are trying UEFI + Windows 8. Create a small 1 MiB partition in GParted and change its flag to bios_boot. Then try.

Rajat Khandelwal: What should I do with the partition?

Riya Patankar: Nothing, just create it and set the flag; recently I did the same for installing it with Windows 8.

Rajat Khandelwal: Sorry to disturb you! But now it's showing "No Partition Bootable In Table". I have tried both Fedora 32-bit and 64-bit many times but this error appears again and again. When I restarted my PC to run GParted, it showed the error "invalid or damaged directory". I don't know why this is happening; I've tried many times.

Riya Patankar: Boot with a live USB and then run GParted. Before all this, answer one question: does your machine have UEFI boot? If so, you need to enable legacy boot support. (I haven't tried Linux with UEFI; I've tried with legacy only.)

Rajat Khandelwal: I don't know about UEFI boot. Can you tell me how I can find out whether my machine has UEFI boot or not?

Riya Patankar: Go to the BIOS and check the options in the boot section. Are you running Windows 8?

Rajat Khandelwal: Yes, I am running Windows 8 and my PC has UEFI boot disabled.

Riya Patankar: That means you are using legacy boot; cross-check it. If you are using legacy, then follow the bios_boot flag comment. Recently I did the same with one of the HP laptops.


new products

A floor-standing Monster Tower speaker from Zebronics!

If you have an ear for music, this news is for you! Top Notch Infotronix, owner of the brand Zebronics and India's leading supplier of products and accessories for computers, consumer electronics and communications, has launched the latest model in its Tower Speaker series, the floor-standing Monster Tower speaker. The Zebronics Monster Tower, or ZEB-BT8000RUC, delivers up to 50 W RMS of sound in a unique single-box form factor that dominates the room. The speaker is compatible with Linux, Windows and Mac. A distinctive design feature of the Bluetooth tower speaker is the four 7.62 cm (3-inch) drivers on all sides of the box, for a 360-degree surround sound experience, complementing a 12.7 cm (5-inch) sub-woofer. The Bluetooth connectivity feature in this model is a first for the Zebronics Sound Monster speaker line-up, and enables wireless input from any playing source with Bluetooth functionality. "This new speaker, which we have named 'Dabang', is the first to feature Bluetooth connectivity. Though a single-box form factor, the four-sided throw of the Dabang speaker produces high resolution audio quality, offering more vigour to any music track, all at an extremely affordable price point," said Pradeep Doshi, director, Top Notch Infotronix.

The Lava Iris 455 packs a punch with Jelly Bean OS!
The Lava Iris 455 smartphone is the latest arrival in the Iris family and what distinguishes it from the rest of its competitors is that it runs on the latest Jelly Bean. The phone comes with an 11.4 cm display and has a dual-core processor clocked at 1 GHz along with 512 MB of RAM. The phone is fitted with a 5 MP rear camera with a flash and a VGA front-facing snapper for video calling. We caught up with S N Rai, co-founder and director, Lava International Ltd and he said, It becomes challenging for us to cater to those consumers who are looking for a higher performance in their devices. The smartphone has a very stable performance chip-set. Lava has a large distribution base and offers value for money in its products, so we hope users will like it.

Price: ₹7,799

Address: LAVA International Ltd, A-56, Sector 64, Noida 201301; Ph: 0-120-4637100; E-mail:; Website:

Asus India launches 32 GB Google Nexus 7

Asus India recently announced the availability of the much-awaited ASUS Nexus 7 tablet in India. The 32 GB version of the 17.78 cm (7-inch) tablet is available in both Wi-Fi-only and 3G variants at all Asus stores. The tablet comes with a 17.78 cm IPS display and runs Android 4.1. The slate is powered by a 1.2 GHz quad-core Nvidia Tegra 3 processor, coupled with 1 GB RAM. Peter Chang, regional head, South Asia and country manager, Systems Business Group, ASUS India, said, "We believe that ASUS Nexus 7 will bring us even closer to the persistent perfection that we set out to achieve and offer to our consumers."

Address: Asus Technology Pvt Ltd, 4C, Gundecha Enclave, Kherani Road, Near Sakinaka Police Chowki, Sakinaka, Andheri-E, Mumbai 400072; Ph: 022 67668800; E-mail:; Website:

Price: ₹18,999 for the 32 GB Nexus 7 Wi-Fi tablet; ₹21,999 for the Nexus 7 3G tablet with 32 GB storage

Now, a keyboard for Android tablet users!

RAPOO has launched an ultra-slim Bluetooth keyboardthe RAPOO E6500 for Android tablet users. The E6500 has 13 Android hotkeys for media control, and is compatible with Android tablets and smartphones. Sunil Srivastava, India sales and marketing manager at RAPOO Technologies, said, This is designed to support developers as they enhance the working experience and enrich the entertainment quotient of Android devices."

Price: ₹4,999 (Zebronics Monster Tower)

Address: Top Notch Infotronix, 6c Valliamal Road, Vepery, Chennai; Email:; Website:

Price: ₹4,094 (RAPOO E6500)

Address: RAPOO Technologies Limited, D-1, TF, Shyam Bhawan, Plot No 514 and 516, Zhenda Colony, Fatehpur Beri, Asola Extension, New Delhi 110074; Ph: +91-98999-94802; Email:; Website:



new products

The first convertible tablet-laptop launched by WishTel

WishTel's first convertible tablet-laptop, the IRA Capsule, features a 25.6 cm (10.1-inch) LED multi-touch screen and runs the latest Jelly Bean OS. The device is powered by a 1.6 GHz dual-core processor, coupled with 1 GB RAM. The hybrid tablet sports a 5 MP rear camera and a 0.3 MP front-facing snapper for video calls. Along with its chic looks, the device also comes with a powerful 8,000 mAh battery. Milind Shah, chief executive officer, WishTel, said, "We wanted to give our consumers an enhanced, integrated mobile computing product and so we came up with the IRA Capsule. This product not only offers a long battery life but is also meant to enrich the viewing and learning experience."

Price: ₹16,000

Address: WishTel Pvt. Ltd, 4, Champaklal Udyog Bhavan, Sion (East), Mumbai 400022; Ph: 022-30010700; Email:; Website:

WickedLeak Wammy Titan 2 phablet hits markets and gets Android 4.2 update
WickedLeak has come up with yet another Android phablet, the Wammy Titan 2. The news that will excite all phablet fans is that the company has recently announced an Android 4.2 update for the quad-core phablet. The Titan 2 sports a 13.4 cm (5.3-inch) qHD display and runs Android 4.1 (Jelly Bean), with an Android 4.2 upgrade promised by the company. The phone is powered by a 1.2 GHz MediaTek quad-core processor, coupled with 1 GB RAM. The Titan 2 also comes fitted with a 12 MP rear camera and a big 5 MP front-facing shooter. It comes with an internal storage capacity of 4 GB, with a microSD card slot. Shares Aditya Mehta, CEO, WickedLeak, "We are the first company to give the Android 4.2 update in the budget space. Android lovers crave to experience the best and the latest, and they don't always get it, so we decided to power our product with 4.2."

Price: ₹13,990

Address: Wicked Leak Inc, Aditya Villa, Waman Wadi, S.T. Road, Chembur, Mumbai 400071; Ph: 65017532; Website:

LG brings out Optimus L3 II Dual

The LG Optimus brigade got a shot in the arm with the launch of the new Optimus L3 II Dual in India. According to LG Electronics, the successful L Series has evolved to include contemporary design aesthetics and innovations that improve on the original. The LG Optimus L3 II Dual sports an 8.1-cm (3.2-inch) IPS display and is powered by a single-core 1 GHz processor. It packs in a 1,540 mAh battery and runs the latest Android Jelly Bean. The dual-SIM device also has a 3 MP rear camera and 4 GB of internal storage. The smartphone comes loaded with features like Smart Forwarding, which helps manage calls between both SIMs. When we caught up with Amit Gujral, head, Marketing, LG Mobiles, he enthused, "There is a growing need among consumers for a smartphone that takes care of their daily convergence needs. The LG Optimus L-Series II smartphones address those very needs by providing ample features like a full IPS display, the latest version of Android (Jelly Bean), an excellent camera and a host of useful apps like Quick Translator, Quick Memo, QSlide, Cheese Shutter Camera, etc, and of course, loads of apps from the Google Play Store."

Address: LG Electronics India, Plot Number 51, Udyog Vihar, Surajpur-Kasna Road, Greater Noida 201306; Ph: 0120-2560900

Price: ` 8,880

OFFER OF THE MONTH

Cloud website hosting at an affordable price: Sign up now and get Rs 500 free usage credit. Coupon code: eNlightMay. For more information, call 1800-209-3006/+91-253-6636500.

Get one month free on a fully managed enterprise-class dedicated server or VPS: Intel Xeon quad-core, 16 GB RAM, 2 x 1 TB HDD in an Indian data centre with a free Plesk control panel. *On pre-payment of six months. Offer valid till 30th May 2013. Call +91 (22) 6781 6699 and mention coupon code OSFY.

Learn shell scripting for free: Get trained in RHCE and RHCVA at "India's Most Loved Linux Training Company" and learn shell scripting for free. Offer valid till 30th May 2013. Contact +91-98868 79412/+91-80-42425000.

Partner with a global brand: Get 50 per cent off on franchise fees and be your own boss with significant earning potential. Offer valid till 30th May 2013. Contact Ms Shivani Sharma on +91-9310024501 and mention coupon code OSFY.

Free registration for OSFY readers at World Hosting Days: WHD India is to be held on 27-28 May 2013 at Renaissance Mumbai Convention Centre Hotel. For free registration, use code M1WHR95.

20 per cent discount on RHCE training and special offers on all other Red Hat certifications. Get yourself registered now! Offer valid till 25th May 2013. Contact: Tel: 0484-2366258, Mob: 09447294635, 09447169776, and mention coupon code OSFYAPRIL13.
Videocon VT75C
OS: Android 4.1 (Jelly Bean); Launch Date: April 2013; MRP: ` 6,499; ESP: ` 5,990
Specifications: 17.7-cm (7-inch) touchscreen display, 1600 x 1200 pixels screen resolution, 1 GHz processor, 512 MB RAM, 3,000 mAh battery, 2 MP rear and 0.3 MP front-facing camera, 4 GB internal memory, expandable up to 32 GB, 3G via dongle, WiFi

Karbonn Smart Tab TA-FONE A37
OS: Android 4.0; Launch Date: April 2013; MRP: ` 7,990; ESP: ` 7,290
Specifications: 17.7-cm (7-inch) capacitive touchscreen, 800 x 480 pixels screen resolution, 1 GHz processor, 512 MB RAM, 3,000 mAh battery, 2 MP rear camera, 0.3 MP (VGA) front camera, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi

WishTel IRA Capsule
OS: Android 4.1; Launch Date: March 2013; MRP: ` 16,000; ESP: ` 16,000
Specifications: 25.6-cm (10.1-inch) LED multi-touch capacitive touchscreen, 1024 x 768 pixels screen resolution, 1.6 GHz dual-core processor, 1 GB RAM, 8,000 mAh battery, 5 MP rear and 0.3 MP front camera, expandable memory up to 32 GB, 3G, WiFi

Datawind Ubislate 7C+ Edge
OS: Android 4.0; Launch Date: March 2013; MRP: ` 5,999; ESP: ` 5,999
Specifications: 17.7-cm (7-inch) capacitive touchscreen, 800 x 480 pixels screen resolution, 1 GHz processor, 512 MB RAM, VGA secondary camera, 4 GB internal memory, expandable up to 32 GB, 2G, WiFi

iBall Edu-Slide
OS: Android 4.1 (Jelly Bean); Launch Date: March 2013; MRP: ` 14,999; ESP: ` 12,999
Specifications: 25.6-cm (10.1-inch) touchscreen, 1280 x 800 pixels screen resolution, 1.5 GHz dual-core processor, 1 GB RAM, 2 MP rear and VGA front-facing camera, 8 GB internal storage, 3G, WiFi

Lava E-Tab Connect
OS: Android 4.1 (Jelly Bean); Launch Date: March 2013; MRP: ` 9,499; ESP: ` 9,499
Specifications: 17.7-cm (7-inch) WVGA capacitive touchscreen, 1.2 GHz Qualcomm processor, 512 MB RAM, 3,000 mAh battery, 2 MP rear camera, 4 GB internal storage, expandable up to 32 GB, 3G, WiFi

Salora Protab HD
OS: Android 4.1 (Jelly Bean); Launch Date: March 2013; MRP: ` 6,599; ESP: ` 5,499
Specifications: 17.7-cm (7-inch) LCD capacitive touchscreen, 1024 x 600 pixels screen resolution, 1.2 GHz processor, 1 GB RAM, 0.3 MP front camera, 3,200 mAh battery, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi

Salora Protab
OS: Android 4.0; Launch Date: March 2013; MRP: ` 6,199; ESP: ` 4,999
Specifications: 17.7-cm (7-inch) LCD capacitive touchscreen, 480 x 800 pixels screen resolution, 1.5 GHz processor, 0.3 MP front-facing camera for video calling, 512 MB RAM, 3,200 mAh battery, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi

Zync Quad 8.0
OS: Android 4.1 (Jelly Bean); Launch Date: March 2013; MRP: ` 12,990; ESP: ` 12,990
Specifications: 20.3-cm (8-inch) capacitive touchscreen, 1024 x 768 pixels screen resolution, 1.5 GHz processor, 2 GB RAM, 5,800 mAh battery, 5 MP rear camera, 2 MP front camera, 16 GB internal memory, expandable up to 32 GB, 3G, WiFi

Zync Quad 9.7
OS: Android 4.1 (Jelly Bean); Launch Date: March 2013; MRP: ` 13,990; ESP: ` 13,990
Specifications: 24.6-cm (9.7-inch) LED-backlit Super HD IPS touchscreen, 2048 x 1536 pixels screen resolution, 1.5 GHz processor, 2 GB RAM, 8,000 mAh battery, 5 MP rear camera, 2 MP front camera, 16 GB internal memory, expandable up to 32 GB, 3G, WiFi

Swipe Halo
OS: Android 4.1 (Jelly Bean); Launch Date: March 2013; MRP: ` 6,990; ESP: ` 6,990
Specifications: 17.7-cm (7-inch) TFT LCD multi-touch capacitive touchscreen, 1.5 GHz processor, 512 MB RAM, 3,400 mAh battery, 2 MP rear camera, 0.3 MP front camera, 2G, WiFi

Simmtronics XPad X1010
OS: Android 4.0; Launch Date: February 2013; MRP: ` 8,399; ESP: ` 8,399
Specifications: 25.6-cm (10.1-inch) capacitive touchscreen, 1024 x 600 pixels screen resolution, 1.2 GHz processor, 5,600 mAh battery, 0.3 MP front-facing camera, 8 GB internal memory, expandable up to 32 GB, 3G, WiFi

Lava E-Tab Xtron
OS: Android 4.1 (Jelly Bean); Launch Date: February 2013; MRP: ` 6,499; ESP: ` 6,499
Specifications: 17.7-cm (7-inch) IPS multi-touchscreen, 1024 x 600 pixels screen resolution, 1.5 GHz dual-core processor, 3,500 mAh battery, 2 MP rear camera, 8 GB internal memory, expandable up to 32 GB, WiFi

Champion Computers Wtab 705 Talk
OS: Android 4.0; Launch Date: February 2013; MRP: ` 6,330; ESP: ` 6,330
Specifications: 17.8-cm (7-inch) capacitive touchscreen, 480 x 800 pixels screen resolution, 1.5 GHz processor, 4 GB internal memory, expandable up to 32 GB, built-in support for 2G networks

Videocon VT 71
OS: Android 4.0; Launch Date: January 2013; MRP: ` 4,799; ESP: ` 4,529
Specifications: 17.8-cm (7-inch) capacitive touchscreen, 800 x 480 pixels screen resolution, 1.2 GHz processor, 3,200 mAh battery, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi

NXG Xtab A9 Plus
OS: Android 4.0; Launch Date: January 2013; MRP: ` 8,990; ESP: ` 6,990
Specifications: 17.7-cm (7-inch) IPS display touchscreen, 1.2 GHz processor, 3,600 mAh battery, 2 MP rear and VGA front-facing camera, 8 GB internal storage, expandable via microSD, 3G via dongle, WiFi




Spice Stellar Pad Mi-1010
OS: Android 4.1 (Jelly Bean); Launch Date: January 2013; MRP: ` 7,600; ESP: ` 7,600
Specifications: 25.6-cm (10.1-inch) IPS display touchscreen, 1.5 GHz dual-core processor, 7,600 mAh battery, 3 MP rear and VGA front camera, 16 GB internal storage, expandable up to 32 GB, 3G, WiFi

Lava eTab Z7H
OS: Android 4.0; Launch Date: January 2013; MRP: ` 5,499; ESP: ` 5,499
Specifications: 17.7-cm (7-inch) capacitive display touchscreen, 1 GHz processor, 2,800 mAh battery, 0.3 MP front camera, 4 GB internal storage, expandable up to 32 GB, 3G, WiFi

Karbonn Smart Tab 8 Velox
OS: Android 4.1 (Jelly Bean); Launch Date: January 2013; MRP: ` 7,025; ESP: ` 7,025
Specifications: 20.3-cm (8-inch) capacitive display touchscreen, 1024 x 768 pixels screen resolution, 1.5 GHz dual-core processor, 4,500 mAh battery, 3 MP rear and VGA front camera, 1.5 GB internal storage, expandable up to 32 GB, WiFi

(Device name missing in the source listing)
OS: Android 4.0; Launch Date: January 2013; MRP: ` 7,999; ESP: ` 7,899
Specifications: 17.7-cm (7-inch) WVGA capacitive touchscreen, 800 x 480 pixels screen resolution, 1 GHz Cortex processor, 3,200 mAh battery, 2 MP rear camera, 0.3 MP front camera, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi

iBerry Auxus CoreX2
OS: Android 4.1 (Jelly Bean); Launch Date: December 2012; MRP: ` 10,990; ESP: ` 10,990
Specifications: 17.7-cm (7-inch) IPS display touchscreen, 1.6 GHz dual-core processor, 4,100 mAh battery, 2 MP rear and 0.3 MP front camera, 8 GB internal storage, expandable up to 64 GB, 3G, WiFi

iBerry Auxus CoreX4
OS: Android 4.0; Launch Date: December 2012; MRP: ` 15,990; ESP: ` 15,990
Specifications: 24.6-cm (9.7-inch) IPS display touchscreen, 1.6 GHz quad-core processor, 7,200 mAh battery, 2 MP rear and VGA front camera, 16 GB internal storage, expandable up to 64 GB, 3G, WiFi

Videocon VT10
OS: Android 4.1 (Jelly Bean); Launch Date: December 2012; MRP: ` 11,200; ESP: ` 11,200
Specifications: 25.6-cm (10.1-inch) IPS display touchscreen, 1280 x 800 pixels screen resolution, 1.5 GHz dual-core processor, 6,800 mAh battery, 2 MP front and rear cameras, 8 GB internal storage, expandable up to 32 GB, 3G, WiFi

Simmtronics XPAD X-720
OS: Android 4.0; Launch Date: December 2012; MRP: ` 4,600; ESP: ` 4,600
Specifications: 17.7-cm (7-inch) capacitive display touchscreen, 1 GHz processor, VGA front camera, 2,800 mAh battery, 4 GB internal memory, expandable up to 32 GB, WiFi, 3G via dongle

Lenovo Ideapad A2107
OS: Android 4.0; Launch Date: December 2012; MRP: ` 13,999; ESP: ` 13,999
Specifications: 17.7-cm (7-inch) HD display touchscreen, 1024 x 600 pixels screen resolution, 1 GHz MediaTek processor, 2 MP rear and 0.3 MP front camera, 16 GB internal memory, expandable up to 32 GB, 3G, WiFi

Ambrane Mini
OS: Android 4.0; Launch Date: November 2012; MRP: ` 5,499; ESP: ` 5,034
Specifications: 17.7-cm (7-inch) TFT capacitive touchscreen, 800 x 480 pixels screen resolution, 1.2 GHz processor, 3,000 mAh battery, built-in 0.3 MP camera, WiFi

Acer Gateway NE56R
Launch Date: December 2012; MRP: ` 22,699; ESP: ` 20,800
Specifications: 15.6-inch TFT LCD display, 1366 x 768 pixels screen resolution, 2.1 GHz Intel Pentium processor, 2 GB memory, expandable up to 8 GB, DVD SuperMulti drive with dual-layer support, 500 GB hard disk storage capacity, 2.6 kg weight

Dell Vostro 2520
Launch Date: January 2013; MRP: ` 33,500; ESP: ` 27,499
Specifications: 15.6-inch HD WLED anti-glare display, 1366 x 768 pixels screen resolution, Core i3 (2nd generation) processor, 2 GB DDR3 memory, expandable up to 8 GB, Intel HD Graphics 3000, 500 GB hard disk capacity, 2.36 kg weight


Adcom Thunder A530
OS: Android 4.1 (Jelly Bean); Launch Date: April 2013; MRP: ` 12,000; ESP: ` 9,990
Specifications: 5.3-inch multi-touch LCD touchscreen, 1.2 GHz processor, 512 MB RAM, 2,800 mAh battery, 8 MP rear camera and 2 MP front camera, 4 GB internal storage, expandable up to 32 GB, 3G, WiFi

Micromax A72 Canvas Viva
OS: Android 2.3; Launch Date: April 2013; MRP: ` 7,999; ESP: ` 6,499
Specifications: 5-inch capacitive touchscreen, 480 x 800 pixels screen resolution, 1 GHz processor, 2,000 mAh battery, 256 MB RAM, 3 MP rear camera, memory expandable up to 32 GB, WiFi

Samsung Galaxy S II Plus I9105
OS: Android 4.1 (Jelly Bean); Launch Date: April 2013; MRP: ` 24,000; ESP: ` 23,099
Specifications: 4.3-inch Super AMOLED Plus display touchscreen, 1.2 GHz dual-core processor, 1,650 mAh battery, 8 MP rear camera, 16 GB internal memory, expandable up to 32 GB, 3G, WiFi

Micromax A91 Ninja
OS: Android 4.0; Launch Date: April 2013; MRP: ` 8,999; ESP: ` 8,999
Specifications: 11.4-cm (4.5-inch) TFT display touchscreen, 1 GHz dual-core processor, 1,800 mAh battery, 512 MB RAM, 5 MP rear and 0.3 MP front camera, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi

Lava Iris 455
OS: Android 4.1 (Jelly Bean); Launch Date: April 2013; MRP: below ` 10,000; ESP: not available
Specifications: 11.4-cm (4.5-inch) display touchscreen, 1 GHz dual-core processor, 1,500 mAh battery, 5 MP rear camera with flash, VGA front-facing camera, 512 MB RAM, 4 GB internal storage (2 GB usable), expandable up to 32 GB, 3G, WiFi, dual SIM, Bluetooth, GPS, micro USB

Karbonn Titanium S5
OS: Android 4.1 (Jelly Bean); Launch Date: March 2013; MRP: ` 11,999; ESP: ` 11,299
Specifications: 12.7-cm (5-inch) qHD multi-touch capacitive IPS display, 960 x 540 pixels screen resolution, 1.2 GHz quad-core processor, 2,000 mAh battery, 8 MP rear camera, 2 MP front camera, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi

Karbonn A4
OS: Android 2.3; Launch Date: March 2013; MRP: ` 5,290; ESP: ` 4,685
Specifications: 4-inch display touchscreen, 320 x 480 pixels screen resolution, 1 GHz processor, 512 MB RAM, 3.2 MP rear camera, 512 MB internal memory, expandable up to 32 GB, WiFi

Micromax A35 Bolt Ninja
OS: Android 2.3; Launch Date: March 2013; MRP: ` 5,499; ESP: ` 4,249
Specifications: 4-inch capacitive TFT touchscreen, 480 x 800 pixels screen resolution, 1 GHz processor, 1,500 mAh battery, 256 MB RAM, 2 MP rear camera, 512 MB internal memory, expandable up to 32 GB, WiFi


WickedLeak Wammy Titan 2
OS: Android 4.1 (Jelly Bean), with an Android 4.2 update announced; Launch Date: March 2013; MRP: expected to be around ` 16,000; ESP: ` 13,990
Specifications: 13.4-cm (5.3-inch) qHD display touchscreen, 960 x 540 pixels screen resolution, 1.2 GHz MediaTek quad-core processor, 1 GB RAM, 2,300 mAh battery, 12 MP rear camera, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi

Wicked Leak Wammy Passion Y
OS: Android 4.1 (Jelly Bean); Launch Date: March 2013; MRP: ` 14,990
Specifications: 4.3-inch LCD display touchscreen, 800 x 480 pixels screen resolution, 1.15 GHz processor, 1 GB RAM, 2,100 mAh battery, 5 MP rear camera, 8 GB internal memory, expandable up to 32 GB, 3G, WiFi

Karbonn Titanium S5
OS: Android 4.1 (Jelly Bean); Launch Date: March 2013; MRP: ` 11,990; ESP: ` 11,990
Specifications: 12.7-cm (5-inch) qHD multi-touch capacitive touchscreen, 540 x 960 pixels screen resolution, 1.2 GHz quad-core processor, 1 GB RAM, 2,000 mAh battery, 8 MP rear camera, 2 MP front camera, 3G, WiFi

(Device name missing in the source listing)
OS: Android 4.1 (Jelly Bean); Launch Date: March 2013; MRP: ` 16,999; ESP: ` 16,999
Specifications: 12.7-cm (5-inch) HD IPS display touchscreen, 1280 x 720 pixels screen resolution, 1.2 GHz quad-core processor, 1 GB RAM, 2,800 mAh battery, 8 MP rear and 2 MP front-facing camera, 4 GB internal storage, expandable up to 32 GB, 3G, WiFi

Xolo X1000
OS: Android 4.0; Launch Date: March 2013; MRP: ` 24,999; ESP: ` 19,999
Specifications: 11.9-cm (4.7-inch) TFT LCD touchscreen, 720 x 1280 pixels screen resolution, 2 GHz Intel Atom processor, 1 GB RAM, 1,900 mAh battery, 8 MP rear camera, 1.3 MP front camera, 8 GB internal storage, expandable up to 32 GB, 3G, WiFi

Karbonn A6
OS: Android 4.0; Launch Date: March 2013; MRP: ` 5,990; ESP: ` 5,390
Specifications: 4-inch IPS WVGA touchscreen, 480 x 800 pixels screen resolution, 1 GHz processor, 512 MB RAM, 1,450 mAh battery, 5 MP rear camera, memory expandable via microSD up to 32 GB

Gionee Dream D1
OS: Android 4.1 (Jelly Bean); Launch Date: March 2013; MRP: ` 17,999; ESP: ` 17,999
Specifications: 4.65-inch HD Super AMOLED Plus display, 1280 x 720 pixels screen resolution, 1.2 GHz quad-core processor, 1 GB RAM, 2,100 mAh battery, 8 MP rear camera, 1 MP front camera, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi

Sony Xperia Z
OS: Android 4.1 (Jelly Bean); Launch Date: March 2013; MRP: ` 38,990; ESP: ` 37,990
Specifications: 12.7-cm (5-inch) full-HD touchscreen, 1080 x 1920 pixels screen resolution, 1.5 GHz processor, 2 GB RAM, 2,330 mAh battery, 13 MP rear camera, 2 MP front camera, 16 GB internal memory, expandable up to 32 GB, 3G, WiFi



Sony Xperia ZL
OS: Android 4.1 (Jelly Bean); Launch Date: March 2013; MRP: ` 36,990; ESP: ` 35,490
Specifications: 12.7-cm (5-inch) TFT capacitive touchscreen, 1080 x 1920 pixels screen resolution, 1.5 GHz processor, 2,330 mAh battery, 13 MP rear camera, 2 MP front camera, 16 GB internal memory, expandable up to 32 GB, 3G, WiFi

Idea Zeal 3G
OS: Android 2.3; Launch Date: March 2013; MRP: ` 5,390; ESP: ` 5,390
Specifications: 8.8-cm (3.5-inch) touchscreen, 320 x 480 pixels screen resolution, 1 GHz processor, 256 MB RAM, 3 MP rear camera, microSD card slot compatible with cards up to 32 GB, 2G, WiFi

Lava Iris 430
OS: Android 4.0; Launch Date: March 2013; MRP: ` 7,500; ESP: ` 6,000
Specifications: 10.9-cm (4.3-inch) WVGA display touchscreen, 1 GHz dual-core processor, 512 MB RAM, 1,400 mAh battery, 5 MP rear and VGA front-facing camera, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi

LG Optimus L7 II Dual
OS: Android 4.1 (Jelly Bean); Launch Date: February 2013; MRP: ` 14,990; ESP: ` 14,890
Specifications: 4.3-inch IPS display touchscreen, 400 x 800 pixels screen resolution, 1 GHz dual-core processor, 2,460 mAh battery, 8 MP rear camera and VGA front camera, 4 GB internal storage, 3G, WiFi


LG Optimus G
OS: Android 4.1 (Jelly Bean); Launch Date: February 2013; MRP: ` 34,500; ESP: ` 30,990
Specifications: 11.9-cm (4.7-inch) HD display touchscreen, 1.5 GHz quad-core processor, 2 GB RAM, 2,100 mAh battery, 13 MP rear and 1.3 MP front camera, 3G, WiFi

Micromax A116 Canvas HD
OS: Android 4.1 (Jelly Bean); Launch Date: February 2013; MRP: ` 15,000; ESP: ` 15,000
Specifications: 12.7-cm (5-inch) IPS LCD touchscreen, 1280 x 720 pixels screen resolution, 1.2 GHz quad-core processor, 8 MP rear camera, 4 GB internal storage with microSD card support up to 32 GB, GPS with GLONASS and A-GPS, 3G, WiFi

Spice Stellar Buddy Mi-315
OS: Android 2.3; Launch Date: February 2013; MRP: ` 3,400; ESP: ` 3,400
Specifications: 3.2-inch capacitive touchscreen, 1 GHz processor, 1,400 mAh battery, 3.2 MP rear camera, 170 MB internal memory, expandable up to 32 GB, 2G, WiFi

Lava Iris 351
OS: Android 2.3; Launch Date: February 2013; MRP: ` 3,899; ESP: ` 3,899
Specifications: 3.5-inch capacitive touchscreen, 320 x 480 pixels screen resolution, 1 GHz processor, 1,300 mAh battery, 2 MP camera, 110 MB internal memory, expandable up to 16 GB, WiFi

Intex AQUA Wonder
OS: Android 4.1 (Jelly Bean); Launch Date: February 2013; MRP: ` 9,990; ESP: ` 9,990
Specifications: 4.5-inch capacitive touchscreen, 960 x 540 pixels screen resolution, 1 GHz dual-core processor, 1,800 mAh battery, 8 MP rear camera, 1.93 GB internal memory, expandable up to 32 GB, 3G, WiFi

Micromax A27 Ninja
OS: Android 2.3; Launch Date: February 2013; MRP: ` 3,349; ESP: ` 3,349
Specifications: 8.8-cm (3.5-inch) capacitive touchscreen, 480 x 320 pixels screen resolution, 1 GHz processor, 1,400 mAh battery, 2 MP rear camera, 160 MB internal memory, expandable up to 16 GB, 2G, WiFi

Umi X1
OS: Android 4.0; Launch Date: February 2013; MRP: ` 19,999; ESP: ` 9,999
Specifications: 11.4-cm (4.5-inch) HD display touchscreen, 1280 x 720 pixels screen resolution, 1 GHz processor, 1,750 mAh battery, 8 MP rear and 2 MP front camera, 3G, WiFi

Videocon A27
OS: Android 4.0; Launch Date: February 2013; MRP: ` 5,999; ESP: ` 5,999
Specifications: 4-inch WVGA capacitive touchscreen, 1 GHz processor, 512 MB RAM, 1,500 mAh battery, 3 MP rear and VGA front camera, 3G, WiFi

Lava Iris 501
OS: Android 4.0; Launch Date: January 2013; MRP: ` 9,999; ESP: ` 9,999
Specifications: 5-inch WVGA touchscreen, 480 x 800 pixels screen resolution, 1 GHz processor, 2,300 mAh battery, 512 MB RAM, 5 MP rear camera, memory expandable up to 32 GB, 3G, WiFi

Spice Stellar Pinnacle Mi-530
OS: Android 4.0; Launch Date: January 2013; MRP: ` 13,999; ESP: ` 13,999
Specifications: 1.2 GHz dual-core processor, 2,550 mAh battery, 8 MP rear and 5 MP front-facing camera, 16 GB internal storage capacity, expandable up to 32 GB, 3G, WiFi

iBall Andi 4.5Q
OS: Android 4.0; Launch Date: January 2013; MRP: ` 11,490; ESP: ` 10,990
Specifications: 4.5-inch capacitive touchscreen display, 800 x 480 pixels screen resolution, 1 GHz dual-core Qualcomm processor, 1,500 mAh battery, 5 MP rear camera, 4 GB internal memory, expandable up to 32 GB, 3G, WiFi

HTC Butterfly
OS: Android 4.1 (Jelly Bean); Launch Date: January 2013; MRP: ` 49,900; ESP: ` 45,990
Specifications: 12.7-cm (5-inch) full-HD display touchscreen, 1920 x 1080 pixels screen resolution, 1.5 GHz quad-core processor, 2,020 mAh battery, 2 GB RAM, 8 MP rear and 2.1 MP front-facing camera, 16 GB internal storage, expandable via microSD, 3G, WiFi

FOSSBYTES

Facebook introduces 'Home' for Android!

Facebook has always ensured that it devises ways to keep people hooked on its mobile platform software. This time, at an event in the US, the social networking site launched the much-awaited Facebook Home for Android. It will change the main screen of the Android phone on which it is installed and give users access to Facebook services. Mark Zuckerberg, co-founder and chief executive, Facebook, has clearly stated that Home is not a forked Android version; instead, the launcher will work right on top of current Android devices. He further explained that Android's open source nature enables people to develop a custom user interface that can be loaded on Android devices. Home has been created to offer a unique, people-centric 'homely' experience on Android devices, which have currently become more app-centric. "There is no point in having a Facebook Phone, when Facebook Home itself can be used by a lot more Android users," Zuckerberg added. In terms of features, Home comes with new lock and home screens that are based on one's Facebook page or profile. The new interface has features like Cover Feed and Chat Heads, and offers a more personalised mobile experience. This means users will be able to go through Facebook content via Cover Feed, interact via SMS and Facebook messaging (with Chat Heads), and also use the regular Android apps installed on the device from Home's own app launcher. Facebook Home will be available for selected devices like the Samsung Galaxy S4, Galaxy S3, Galaxy Note 2, HTC One X, HTC One X+ and the recently launched HTC One for the time being; more devices will be added in the coming months.

Fuduntu Linux reaches end-of-life!

Here is some bad news for Fuduntu lovers. Developers have announced that they are finally closing the doors on this distro. Back in 2010, Fuduntu was built as a Fedora-based Linux distribution; however, it was forked soon afterwards. The project team explained that they have been trying to offer a desktop experience somewhere between Fedora and Ubuntu. September 30, 2013, will be the last date of official support for the distro. The news was announced via an official blog post by Lee Ward, project communications leader. He wrote, "No new features will be implemented. The only exceptions are those features which are already being worked on. We will continue to provide bug and security fixes until the last day of support, however. In addition to this, the move of the Linux world to systemd has caused a problem for Fuduntu as it has become a required thing for many programs, but we do not use it. Together with the GTK issue, Fuduntu has reached an impasse. To move forward would take quite a bit of time and manpower, neither of which can be supported. The team discussed several options and, ultimately, voted to end-of-life Fuduntu Linux. This decision was not made lightly but, ultimately, it was the best option. In its current state, Fuduntu would be broken if we tried to move forward."

Ubuntu 13.04 Linux better than Apple OS X 10.8.3?

Ubuntu 13.04 is typically known to be competitive with OS X 10.8.3, but new reports suggest that Ubuntu 13.04 is actually outperforming Apple's operating system on its own MacBook hardware in some areas. The same Apple MacBook Pro hardware was used for testing both operating systems with Intel integrated graphics. The MacBook Pro used for benchmark testing was powered by an Intel Core i5 520M processor, 4 GB RAM, a 120 GB OCZ Agility 2 SSD and Intel Ironlake IGP graphics. The fact that Ubuntu 13.04 performs better than Apple OS X on devices like the Apple MacBook Pro is a clear indication that the open source Linux-based operating system has enough juice to outrun the best in the business.

Google Play brings the Movies to India

Google Play has a piece of good news for movie buffs in India. After rolling out the Google Play Books store, the search-engine giant has launched yet another Google Play product: Google Play Movies. This allows users to rent and download English as well as Hindi movies in the country via Google Play. Besides renting movies, one can also purchase them directly from the store. In terms of pricing, movie rentals start at around Rs 75 for SD quality and Rs 120 for HD. To purchase a movie, the lowest prices are around Rs 180 for SD and Rs 490 for HD. Currently, it appears that there are more English than Hindi movies. It is expected that Google India will be in talks with Indian production houses to get the rights required to make more Hindi movies available on Google Play. With the launch of both e-books and movies on Google Play, the only other products Indian Android users cannot access yet are music, magazines and TV shows.


Richard Stallman calls Ubuntu Linux 'spyware'

Richard Stallman, the author of the GNU General Public License (GPL), as well as the founder and president of the Free Software Foundation, has told a South American free software association, FLISOL, not to promote Ubuntu Linux at its events. He reasoned that Ubuntu Linux "spies on its users" by collating users' desktop search activity and selling it to Amazon. This came up because Canonical had released Ubuntu 12.10 last year with Amazon search integrated into its Dash desktop search function. While Canonical says that users are free to opt out of this feature, and that it also makes users' search information anonymous before sending it to Amazon, the fact that Ubuntu users are shown Amazon ads in response to desktop search queries is something that has not gone down too well with many. The event organiser, however, denied Stallman's request, saying that it doesn't want to restrict users' freedom of choice. Stallman, evidently, was miffed by the response and sent an e-mail to all those on the organisation's mailing list.

Balakrishnan to head virtualisation in Red Hat

Red Hat has announced the appointment of a new head for its virtualisation technologies and solutions. Radhesh Balakrishnan will serve as the global leader of Red Hat's virtualisation infrastructure solutions, with responsibility for the company's Red Hat OpenStack and Red Hat Enterprise Virtualization technologies and solutions. Balakrishnan joins Red Hat from Microsoft, where he held product management and marketing responsibilities across a wide range of product lines. "Radhesh and his team will continue to drive open virtualisation forward through our Red Hat Enterprise Virtualization and Red Hat OpenStack technologies and solutions, both of which are central to Red Hat's open hybrid approach to the cloud. We welcome Radhesh to the Red Hat team," said Paul Cormier, president, Products and Technologies, Red Hat.

Three new Linux certifications on a platter!

With Linux growing at a fast pace, the demand for Linux certifications is also on the rise. PaulPaulito, a leading provider of online IT training courses focused on open source technologies, addresses this demand. The training provider has announced the availability of three new Linux certifications: Linux Basics, Linux Technician 1 and Linux Technician 2, which will be available from May. The certifications are being offered at an affordable price, claims the company. The Linux Basics certification is intended for the academic sector, the youth and others new to the world of Linux and open source technologies. It is also useful for IT professionals looking to earn an entry-level Linux operating system certification. The Linux Technician 1 and Linux Technician 2 certifications are suited for IT professionals and for those looking to earn a first-class Linux certification.

Sailfish OS SDK released for Linux

Jolla, the developer of the Sailfish OS, has released the first SDK for the operating system. The launch of the SDK was delayed well past its original schedule. Jolla has created SDK builds for developers using 32-bit and 64-bit Linux, as well as for Windows and Mac users. The company says the SDK has been designed with the most recent consumer needs in mind, enabling "effortless multi-tasking" and bringing "usability and speed-of-use to a totally new level, unseen in the mobile industry".

An open source font that you can't see!

Adobe has announced the launch of Adobe Blank, an open source font that cannot be seen. The creator, Ken Lunde, mentioned in an official blog post that the need for a special-purpose font came up when a few development teams asked the Adobe Type Team to produce a font that covered all Unicode code points, with every code point rendered using a non-spacing and non-marking glyph. The team soon realised that the Adobe-Identity-0 ROS was the right platform for developing such a font. The two major purposes of this font, according to developer Joel Brandt, are as follows. First, invoking this font, as a temporary measure, prevents OS- or application-level font fallback from kicking in before the intended font can be rendered. Related to this, using the font allows one to detect when a Web font has actually loaded, which is arguably a hack to overcome a limitation in CSS. The second purpose is to include Adobe Blank as a data URI in the CSS file. One example would be an element that is absolutely positioned off-screen: check the width of the DOM element; if it's zero, SomeWebFont hasn't loaded yet, and if it's greater than zero, it has. An example of a real-world use for Adobe Blank is the Adobe Edge Web Fonts extension for Brackets, which is available on GitHub. Those who are interested can download the font from SourceForge, and it will soon be on GitHub too.
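The loading-detection trick described above can be sketched as a small CSS fragment. This is a minimal illustration under stated assumptions, not Adobe's exact code: the font family name "SomeWebFont", the element id "font-probe" and the polling comment are all hypothetical, and the base64 data URI is truncated here.

```css
/* Fallback stack: until SomeWebFont loads, the probe text renders in
   Adobe Blank, whose glyphs are all zero-width and non-marking, so the
   element measures 0px wide. */
@font-face {
  font-family: "AdobeBlank";
  /* In practice, Adobe Blank is embedded as a base64 data URI
     so it is available immediately (truncated here). */
  src: url("data:font/opentype;base64,...") format("opentype");
}

#font-probe {
  position: absolute;
  left: -9999px; /* keep the probe element off-screen */
  font-family: "SomeWebFont", "AdobeBlank";
}

/* A script then polls the probe's offsetWidth: zero means the web font
   has not loaded yet; a non-zero width means it has. */
```

The key design point is the fallback order: the invisible Adobe Blank sits last in the stack, so nothing is drawn (and nothing takes up width) until the real web font arrives.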

An Android-based OS from China

It seems that the mobile OS war is all set to heat up again. As speculations are rife over the arrival of Mozilla's Firefox mobile platform, Jolla's Sailfish OS and even Samsung's own Tizen OS for smartphones, here is another mobile OS, Smartisan from China, which is generating a lot of buzz these days. The brain behind the OS is Luo Yonghao, who is a self-taught ex-English teacher.


According to reports, the platform has been launched in China, where the demand for big-display affordable smartphones from Meizu, Xiaomi and UMI is higher than the demand for Apple iPhones. The Smartisan mobile OS has been built to offer a more human interface, which can be seen in the icons used, which clearly suggest that the traditional square icon style and grid layout have been discarded. But the company has reiterated that it believes in Apple's closed system set-up, and that it is capable of maintaining the quality control of the platform. By the looks of it, it seems Smartisan will adopt a similar set-up.


Firefox 20 comes with improved private browsing

The latest news is that Mozilla has fixed three critical security flaws in Firefox 20 and has also shipped a number of new features with the latest update. The major improvements include per-window private browsing, a new download manager and the ability to close hanging plug-ins separately. While users could carry out private browsing with previous versions, they had to open a separate window to do so. With Firefox 20, however, users can browse privately without closing or changing their current browsing session. The new update also gives desktop users a Safari-like download manager, which pops out from the toolbar. With this, users can monitor, view and locate downloaded files without switching to any other window. Coming to the three flaws that have been fixed, Mozilla's security advisories page lists the following: a bypass of System Only Wrappers that can allow protected nodes to be cloned, a WebGL flaw that only affects Linux users with Intel Mesa graphics drivers, and a range of memory safety hazards, reports ZDNet. The update is available for Windows, Linux, Mac and Android, and users can download it now from the official site. The Android version of the update is available through the Google Play Store.


Google brings the QuickOffice app to Android

After releasing the Office suite app, QuickOffice, for iPad users earlier, Google has now released the same app for Android and iPhone users. QuickOffice, prior to Google buying the app, was recognised as a third-party app; so this is the first version to be launched after the buy-out. Google has stated that QuickOffice is basically meant for business users, who still prefer the Office suite on their PCs, instead of the more advanced Google Docs. With QuickOffice, users can create new Word files or edit them, make PowerPoint presentations or create Excel sheets on the move, and save them all in Google Drive. Apart from basic Office features, the search-engine giant has enabled easy access to files from within the app itself. Users have to sign in with their Google Apps for Business account and then they will be able to see files



under Shared with Me, Starred, Recent and in any other sub-folder in Drive. Google's QuickOffice app for Android can now be downloaded on Google Play.

VMware announces new campus in Bengaluru

Here is some good news for India! VMware Inc, the global leader in virtualisation and cloud infrastructure, has announced a multi-year investment of $120 million that includes a long-term lease for a new 39,019 sq m (420,000 sq ft) building in South Bengaluru, currently under construction. VMware's India-based R&D and support operations are second in size and scale only to those at the company's headquarters in Palo Alto, California, USA. Its existing facilities in Bengaluru will be consolidated into the new state-of-the-art premises, which will seat 2,700 employees when ready next year. The campus will accommodate new and ongoing product R&D, as well as a large staff supporting VMware's global operations and India's sales teams. VMware's R&D operations in India make a significant contribution to the company's portfolio of virtualisation and cloud computing products, all designed to help VMware customers navigate the journey to a new era of IT.

You can now access Gmail in six Indian languages!

It seems Google has big plans to lure Indian consumers. After recently launching the Google Nexus 7 tablet on Google Play in India and introducing other Google Play products like eBooks and Movies, the search-engine giant is now offering features like additional language support on Gmail for users who are still comfortable using non-Android-based feature phones. The untapped Internet user base still constitutes a large percentage of prospective Internet users, and by providing language support beyond English and Hindi, Google is eyeing the mass of users who still get online via a Java-based feature phone and who prefer to communicate in their native languages. Apart from the Gmail language feature, the search-engine giant has also released an offline mode for its Google Translate app. The new feature offers offline translations in over 50 languages. This comes in handy for frequent travellers, who may not always have an active data connection. To use the offline translation feature, users have to select Offline Languages from the app menu.

Acquia open sources iOS Drupal publishing app

This is major news for all Drupal lovers. Acquia, the commercial provider of Drupal Web CMS services, has released the code base for building native iOS apps that can post content straight to a Drupal website, the first of its kind. The Drupal-services company has posted the code for Drupal Create on GitHub with a BSD-like licence. The Drupal Create code served as a base for the iOS app Acquia released for publishing content on its hosted Drupal Gardens service. "With this release, Acquia is kick-starting the development of mobile content publishing apps for Drupal," Drupal co-creator and Acquia co-founder, Dries Buytaert, said in a statement.

Here's a new design for the Twitter for Android app

Twitter, the popular micro-blogging site, has received its latest Android app update that displays tweets in a whole new wider and taller format. The updated version offers a native Android experience to users. There is better content viewing in tweets and the navigation bar at the top has been made a lot flatter compared to the previous versions. The Twitter for Android app has been in dire need of enhancement, and this timely update brings a refreshed and more reader-friendly look to the Twitter page. "The app now brings wider and taller timelines that fill the screen, a flat navigation bar, tap and hold for quick actions, and more," the company stated. Apart from the content feature, the update enables users to swipe across the screen to change tabs, and offers auto-complete suggestions for usernames and hashtags. Also, the next time you plan to start a conversation with a username and hashtag, Twitter will offer the friends-to-be-included option while composing the tweet.

What's new in GNOME 3.8?

The developers of the GNOME project have announced the release of the latest version, GNOME 3.8. Its new features include a new Clocks application, enhanced search functionality, new privacy settings and a number of design changes throughout the desktop environment. The redesign will aid in finding applications faster.


Also, the Settings application has gained four new configuration panels. According to reports, a few tweaks have also been made to GNOME's file manager, the Web browser and the Contacts and Documents applications. GNOME 3.8 is the first major version of the desktop since GNOME 3.6 was released last year in September. This version has also done away with Fallback Mode for systems without hardware-accelerated graphics, and has instead introduced a new Classic Mode; where hardware-accelerated graphics are not available, GNOME now uses llvmpipe.

Novell advances mobility portfolio with two additions to Novell ZENworks Product Suite

Novell has announced the release of ZENworks Application Virtualisation 10 and ZENworks Mobile Management (ZMM) 2.7, adding to Novell's mobility portfolio and meeting the productivity and security needs of today's device-saturated businesses. Both releases further equip organisations with the resources and technology to manage growing enterprise application needs, as well as employees' needs when working from multiple devices. Today's business environment demands that IT staff stay one step ahead of the mobile workforce. These latest releases help accomplish this by addressing pressing mobility trends such as Bring Your Own Device (BYOD), as well as traditional enterprise pain points like application management. ZENworks Application Virtualisation 10 allows users to get virtual versions of their Windows applications where they need them. Through the click-and-go convenience of secure Web portals, users get exactly what they need, when they need it, from any Windows laptop or desktop, regardless of location. This enables IT departments to ensure that users are authorised to access the appropriate applications; they can track consumption of virtualised applications to help organisations optimise their acquisition of software licences. ZENworks Mobile Management 2.7 introduces the ability to assign policies to entire Lightweight Directory Access Protocol (LDAP) groups simultaneously, eliminating the tedious and time-consuming task of applying a new policy to each individually.

An open source alternative to Google Docs

If you are on the lookout for document/office software like Google Docs, Microsoft Office and LibreOffice, you're in luck. OX Document, from the stable of Open-Xchange, which is known for setting up a Microsoft Exchange alternative, is about to arrive. According to reports, OX Document is still in its initial development phase and the company has released a demo version of OX Text, its word processor. OX Text will be capable of editing content from Microsoft Word, OpenOffice and LibreOffice. The company's CEO has also claimed that OX Text can retain the entire content of a Word document and edit about 80 per cent of a Word file's content. Apart from the convenience of editing anywhere and everything, OX Text will also enable multiple users to view and edit documents simultaneously.

VDR Linux video recorder version 2 released

Linux users have now been offered the second version of the Video Disk Recorder (VDR), released by author Klaus Schmidinger. The new video software comes with full HDTV support. According to a report, the images from TV are generally processed either by a plugin using the graphics card or by a DVB card's hardware decoder. The recorder's HD image rendering function is currently supported by TechnoTrend's S2 6400 DVB card only. The HDTV enhancement has made Schmidinger change the recording format from PES (Packetised Elementary Stream) to TS (Transport Stream); however, the new VDR can still play back old recordings as well. VDR 2.0 supports Satellite Channel Routing (SCR), which enables multiple receivers to be operated through one cable. The user interface has also been revamped. The on-screen display (OSD) offers full-screen HD and TrueColor support, which aids the rendering of images more efficiently. The new LCARS standard skin is said to be a rendition of the control panels from the Star Trek franchise. The latest version needs DVB drivers that support at least version 5.3 of the DVB API, which means version 3.0 or newer of the Linux kernel or the external DVB drivers. VDR 2.0 comes with additional plug-ins to add more functions to the recorder.


Open Source Guarantees Positive Change

The founder of an open source tricorder aims to make an accessible device that everybody can use to explore their environment, understand how things work, assess the levels of pollution, lighting, etc, and take positive steps to improve their area. Then, there is an open source underwater robot developed by a NASA engineer, which can be used by teachers to make studies on the ocean more interesting. Or how about an open source oximeter that can enable the setting up of a low-cost network to monitor blood oxygen levels of patients in a hospital? Or a pulsating pendant that simply makes you feel good? When we look at the purpose of many open source projects, their common aim seems to be to create some positive change in terms of accessibility, discovery, learning, cost or innovation. And that is what makes open source so special!

A Bluetooth oximeter

Designed by Dimitri Albino, the smARtPULSE oximeter helps monitor blood oxygen levels, using standard photodetection technology. A sensor is placed on a thin part of the patient's body, say a fingertip, and light of two different wavelengths is passed through the patient to a photodetector. The changing absorbance at each of the wavelengths is measured, allowing the device to determine the absorbance due to the pulsing arterial blood alone, excluding vein blood, skin, bone, muscle, fat, etc. With near-infrared spectroscopy, it is possible to measure both oxyhaemoglobin and deoxygenated haemoglobin on a peripheral scale. The oximeter can connect to other devices using low-power Bluetooth 4.0, and people can use it out-of-the-box with the free app that runs on iOS and Android. The open twist: What makes the smARtPULSE special is the complete open source nature of the project and the resulting ability to develop and manufacture the product at low cost, offer the best applications for tracking and analysing the readings, and ease of connecting to any computer running Linux, Windows or OS X. The team is working on a great application programming interface (API) and there will even be libraries available for Arduino, Raspberry Pi and Electric Imp. The special algorithm, hardware, protocol and API are all open source, so anyone can develop projects and

ideas based on the data received from smARtPULSE. The team believes that it will help engineers connect the oximeter to the Internet of Things (Kevin Ashton's concept, introduced in 1999, of all objects being directly linked to the Internet via bar-codes or RFID, without humans playing a role). Consider some examples like remote monitoring of health status and a low-cost network of oximeters for hospitals.

Tricorders that help observe, learn and discover

Remember the tricorders used in Star Trek? A tricorder is a multi-function, handheld device used for sensor scanning, data analysis and recording data. With advancements in embedded systems and sensors, tricorders are becoming more of a reality. In fact, there are even contests such as Qualcomm's X-Prize that encourage engineers to make feature-packed tricorders. One such tricorder enthusiast, Dr Peter Jansen, has come up with a series of open source science tricorders code-named Mark. The tricorders provide a profound understanding of our environment by capturing varied information like temperature, humidity, atmospheric pressure, ambient light, distance and even magnetic fields. On his website, Dr Jansen speaks of how a tricorder can bring about positive change by offering a variety of information: "It's possible that the same instrument that can show a child how much chlorophyll a leaf has could also show how much pollution is in the air around us, or that's given off by one's car." If people could easily discover how their lifestyles impact the environment, they would then make positive lifestyle choices to reduce emissions, for instance. By having access to general tools, people can learn about leaves, air, clouds,

or houses or even light, magnetism, temperature or anything the tricorder can help them see.


The open twist: Dr Jansen's tricorder project is completely open source. People can link it up with other systems and applications to access much more information, do deeper analysis, and use it for learning, experimentation or product development. The latest complete version is the Science Tricorder Mark 2. It runs Linux, and has a number of connectivity and development options. It comprises a suite of atmospheric, electromagnetic and spatial sensors. The upgradeable and self-contained sensor boards have separate processors and use an open sensor protocol. Other tech specifications include: Atmel AT91RM9200 (ARM920T 32-bit RISC core/180 MHz) processor, dual 7.11 cm (2.8") OLED touch displays, Debian Linux, USB host/device and SD card storage. While Mark 3 was abandoned mid-way, Mark 4 is now getting ready; it has been fabricated already and software development is in progress. The goal for the Mark 4 is to bring down the cost and make it more accessible to all.

Do-it-yourself virtual reality projects

There is a huge gap between the excitement generated around virtual reality and the actual resources available for those who want to use the technology in various projects. There are a lot of commercial products (especially headsets) available these days, but not much in terms of design, code, etc. In an effort to bridge this gap, the University of Southern California's MxR Lab has launched a website that showcases headsets and modifications that DIY enthusiasts can build, including open source code for the devices and full-body motion control integration through Kinect for Windows or OpenNI. The open twist: The website at present features two major headset designs and a developer kit, plus a lot of middleware, scripts, code and modifications. The Socket HMD is a complete 1280 x 800 headset with a 3D-printed shell and custom-assembled electronics. The VR2GO viewer is a simpler project, which uses iPhones or iPod touch players as the eye pieces, and mods from the Oculus Rift developer kit for stereo cameras. You may tweak the designs and come up with a viewer that best suits your needs. However, you will need access to a 3D printer to get it printed!

Small and handy submarines, for the explorer in you

OpenROV is a project that aims to open up the ocean to not just the adventurous but also hobbyists and educators. The project is led by NASA engineer Eric Stackpole, and promotes an open source underwater DIY robot. The underwater robot is capable of sinking to depths of up to 100 metres (although it has been tested only to 25 m till date), runs on eight C-cell batteries for approximately 1.5 hours and has a speed of 1 m/s. The structure is made of a sealed cylinder within a laser-cut acrylic frame. The cylinder houses a BeagleBone, a high-definition (HD) webcam and LED lights. It is tethered to a laptop on the shore by a single twisted pair of cables carrying 10 megabit Ethernet data for control and video. The robot can be piloted using a Web browser and video feed. The open twist: You could either buy a complete kit from the website to build an OpenROV, or use the designs and list of materials needed to source whatever you need from a local electronics store. The design is organised into various sections, most of which can be readied within


an hour. With each assembly section, there is a Bill Of Materials (listing the parts and tools you will need), general notes for the assembly (what the best working space is, how long it should take, what each part is for, etc), the instruction set itself, and notes on what to do if something goes wrong. Since the project is open source, improvements and features are continually added by builders who have found better solutions, so you should keep checking the website for updates. You are encouraged to contribute to the core system (body, electronics or code) or build extensions (payloads) that plug into the ROV.

A smart robotics controller

Kovan is a smart robotics controller offered by open source hardware company Sutajio Ko-Usagi. It is targeted at applications that require fully autonomous operation and high levels of integration. Kovan integrates onto a single PCB all the features you need to build a self-guided robot. It has high actuator capability, supporting four 1.2A H-bridge motor drivers and four servo drivers. It can integrate with a range of sensors via eight 10-bit analog inputs (5V/3.3V selectable range), eight digital I/O (5V/3.3V selectable levels), rapid prototyping headers and a built-in 3-axis accelerometer. Kovan also offers several connectivity options including 802.11b/g Wi-Fi and USB 2.0. Its user interface includes an LCD + touchscreen connector (it natively supports an 8.89 cm (3.5") screen), mono audio output, and pushbutton and status LED indicators. It includes an integrated 2-cell Li-ion battery charger. The open twist: At the heart of Kovan is Linux 2.6.34 running on an 800 MHz Marvell ARM processor, with 128 MB DDR2 and a 2 GB micro SD for firmware storage. There is a field-programmable gate array (FPGA) co-processor that enables hard real-time control extensions and advanced image-processing extensions. The native development environment for Kovan is based upon OpenEmbedded, the system that is used by BeagleBone. The package comes with C and Python support out-of-the-box, so you can get started right away with development, without any need to set up cross-compilers. The Gerber files, schematics, source code, firmware, etc, are all available on the website; hence, it is a ready-to-use package.

An electronic necklace for the techie woman

Although just an accessory, Adafruit's iNecklace is an example of how electronics is entering our lives in so many strange yet interesting ways. Made for women who celebrate art, science and technology, the pulsating necklace features a pulsing similar to the breathing LED pattern on many laptop and computer systems. The default pattern is reverse-engineered from Apple's breathing LED on the Mac, MacBook and iMac. The open twist: The iNecklace is CNC machined from fine aluminium, with a screw-in backing. The pendant contains a circuit board with a pulsating LED and battery. A single battery can keep the device pulsing for over 72 hours. The iNecklace hardware is completely open source, so anybody can create their own accessory or gadget using the reverse-engineered pulsing technology. The source code, circuit board files, schematics and CAD files are posted on GitHub.

By: Janani Gopalakrishnan Vikram

The author is a technically-qualified freelance writer, editor and hands-on mom based in Chennai.

Classifieds for Linux & Open Source IT Training Institutes
Linux Training & Certification Courses Offered: RHCSA, RHCE, RHCVA, RHCSS, NCLA, NCLP , Linux Basics, Shell Scripting, (Coming soon) MySQL Address (HQ): 104B Instant Plaza, Behind Nagrik Stores, Near Ashok Cinema, Thane Station West - 400601, Maharashtra, India Contact Person: Ms. Swati Farde Contact No.: +91-22-25379116/ +91-9869502832 Email: Website:

*astTECS Academy Courses Offered: Basic Asterisk Course, Advanced Asterisk Course, Free PBX Course, Vici Dial Administration Course Address (HQ): 1176, 12th B Main, HAL 2nd Stage, Indiranagar, Bangalore - 560008, India Contact Person: Lt. Col. Shaju N. T. Contact No.: +91-9611192237 Email: Website:, Advantage Pro Courses Offered: RHCSS, RHCVA, RHCE, PHP , Perl, Python, Ruby, Ajax, A prominent player in Open Source Technology Address (HQ): 1 & 2 , 4th Floor, Jhaver Plaza, 1A Nungambakkam High Road, Chennai - 600 034, India Contact Person: Ms. Rema Contact No.: +91-9840982185 Email: Website(s): Duestor Technologies. Courses Offered: Solaris, AIX, RHEL, HP UX, SAN Administration(Netapp, EMC, HDS, HP), Virtualisation(VMWare, Citrix, OVM),Cloud Computing, Enterprise Middleware. Address (H.Q.): 2-88, 1st floor, Sai Nagar Colony, Chaitanyapuri, Hyderabad - 060 Contact Person: Mr. Amit Contact Number(s): +91-9030450039, +91-9030450397. E-mail id(s): Websit(es):

Contact Person: Benila Mendus Contact No.: +91-9447294635 Email: Branch(es): Kochi, Kozhikode, Thrissur, Trivandrum Website: Linux Learning Centre Courses Offered: Linux OS Admin & Security Courses for Migration, Courses for Developers, RHCE, RHCVA, RHCSS, NCLP Address (HQ): 635, 6th Main Road, Hanumanthnagar, Bangalore - 560 019, India Contact Person: Mr. Ramesh Kumar Contact No.: +91-80-22428538, 26780762, 65680048 / +91-9845057731, 9449857731 Email: Branch(es): Bangalore Website: Veda Solutions Courses Offered: Linux Programming and Device Drivers, Linux Debugging Expert, Adv. C Programming Address (HQ): 301, Prashanthi Ram Towers, Sarathi Studio Lane, Ameerpet, Hyderabad - 500 073, India Contact Person: Mr. Sajith Contact No.: +91-40-66100265 / +91-9885808505 Email: Website:

GRRAS Linux Training and Development Center Courses Offered: RHCE, RHCSS, RHCVA, CCNA, PHP, Shell Scripting (online training is also available) Address (HQ): GRRAS Linux Training and Development Center, 219, Himmat Nagar, Behind Kiran Sweets, Gopalpura Turn, Tonk Road, Jaipur, Rajasthan, India Contact Person: Mr. Akhilesh Jain Contact No.: +91-141-3136868 / +91-9983340133, 9785598711, 9887789124 Email: Branch(es): Nagpur, Pune Website(s): Network NUTS Courses Offered: RHCE, RHCVA, RHCSS, RHCA, CCNA, Linux Clustering, VMware vSphere ICM Address (H.Q.): A-184, Sukhdev Market, Bhism Pitamaha Road, Kotla Mubarakpur, New Delhi. Contact Person: Ms. Shivani Sharma Contact No.: +91-9310024501, 91-11-46526980/81/82 Email: Branch(es): Preet Vihar (Ms. Kavita Kashyap [M: +91-9310024502]) Website(s):

Eastern Region
Academy of Engineering and Management (AEM) Courses Offered: RHCE, RHCVA, RHCSS, Clustering & Storage, Advanced Linux, Shell Scripting, CCNA, MCITP , A+, N+ Address (HQ): North Kolkata, 2/80 Dumdum Road, Near Dumdum Metro Station, 1st & 2nd Floor, Kolkata - 700074 Contact Person: Mr. Tuhin Sinha Contact No.: +91-9830075018, 9830051236 Email: Branch(es): North & South Kolkata Website:

IPSR Solutions Ltd. Courses Offered: RHCE, RHCVA, RHCSS, RHCDS, RHCA, Produced Highest number of Red Hat professionals in the world Address (HQ): Merchant's Association Building, M.L. Road, Kottayam - 686001, Kerala, India


Let's Try

This article shows you how to make money and have fun with some code, Python and the Internet. It all starts with a four-letter acronym, WSGI: a spec that makes it very easy to write applications for the Internet.

A Primer on WSGI

The WSGI spec is probably the simplest you'll ever come across. It defines exactly one call interface that your application must implement in order to be a WSGI application. What's more, it doesn't have to be a function; it can be any callable. The basic skeleton of a WSGI application is as follows:
def wsgi_application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['Hello World']

(read: real-life) applications as a single function; so you can also write a WSGI application like the one shown below:
class WSGIApplication(object):
    def __call__(self, environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['Hello World']

That's your app. It doesn't do much. It needs all of three lines of code to implement a WSGI application. Of course, it can be quite a headache writing larger

By implementing a __call__() method in a class, you can effectively turn an object into a function (actually, everything in Python is an object, even functions). The class needs to be instantiated into an object, but that object can now behave like a function too.
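To see this in action, here is a small self-contained sketch reusing the class from above. The fake_start_response() helper is a hypothetical stand-in for what a real server would pass in; it simply records the status so we can inspect it:

```python
class WSGIApplication(object):
    def __call__(self, environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['Hello World']

# It is the instance, not the class, that is the WSGI callable
app = WSGIApplication()

# Simulate what a server does: call the app with an environ
# dict and a start_response callable (a stand-in for testing)
def fake_start_response(status, headers):
    print(status)

body = app({}, fake_start_response)  # prints: 200 OK
```

Any WSGI server can now serve app exactly as it would serve the plain-function version, because all it cares about is that app is callable with the right signature.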


The arguments


The callable takes two arguments: environ and start_response(). The first one, environ, is a dictionary that holds the environment, which is basically a CGI environment with a few WSGI-specific keys added. The canonical way to parse a WSGI environ is to import the cgi module and use its FieldStorage class, as follows:
import cgi

# We'll make a copy of the environ and strip out its query
# string, because apparently it's bad for FieldStorage
post_env = environ.copy()
post_env['QUERY_STRING'] = ''

# And now we'll parse it
post = cgi.FieldStorage(
    fp=environ['wsgi.input'],
    environ=post_env,
    keep_blank_values=True
)

The more interesting argument is the second one, start_response(), which is a callable. start_response() is used to set the status code and send out the headers. But why can't that be done as part of returning the content? Well, it's an ugly hack, but a very clever one that is immensely useful, and no one has suggested a better way yet. One of the aims of WSGI is to make the entire thing very loosely coupled, infinitely pluggable and layerable. With this kind of flexibility, one might think the spec would require a huge API. But the Python developers have managed to pack in this extreme flexibility with just the one call interface. The start_response() call tells your server what response your app wants to send to the browser (or client). But the neat trick is, the status (and headers) are sent only after the app has returned, or has yielded at least one element to send to the client. This gives ample room to set up a 202 Accepted response

and go on processing a particularly expensive operation, and change it to a 200 OK or a 400-series error code, depending on the outcome of the operation, just before returning the data. But where this hack really shines is in WSGI middleware. Middleware are among the most interesting types of WSGI apps. These WSGI apps call other WSGI apps. Middleware can add elements to the request, or remove elements that are meant for the middleware to consume. As the response flows back from your app to the middleware and then on to the client, the middleware can analyse and change the response. Middleware solve pretty generic issues, like HTTP authentication, URL-based routing, subdomain-based routing, XSRF protection, and the like. Middleware can be stacked infinitely. Because they are WSGI apps and look like WSGI-compliant servers to the upstream apps, as many WSGI middleware as are needed can be chained together. This is where start_response() really shines. The first middleware calls start_response() with 202 Accepted and calls the next one in the chain to handle the request. Suddenly, one middleware pretty high up in the chain decides to throw a 500 Internal Server Error. As this response travels back down the middleware chain, another middleware picks this up, turns it into a 503 Service Unavailable, and returns a pretty-looking 503 page with the webmaster's e-mail.
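The shape of such a middleware can be sketched as follows. This is a deliberately simplified illustration (all names are hypothetical, and it ignores the exc_info argument and write callable that a fully spec-compliant middleware would handle): it wraps an inner app, intercepts the status the inner app tries to send, and rewrites a 500 into a 503 on the way back down.

```python
class ErrorPageMiddleware(object):
    """Wraps a WSGI app and turns a 500 into a friendlier 503 page."""
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        captured = {}

        # We hand the inner app our own start_response, so we get
        # to see (and veto) the status before the real server does
        def my_start_response(status, headers):
            captured['status'] = status
            captured['headers'] = headers

        body = self.app(environ, my_start_response)
        if captured['status'].startswith('500'):
            start_response('503 Service Unavailable',
                           [('Content-Type', 'text/plain')])
            return ['Sorry, something went wrong. Try again later.']
        start_response(captured['status'], captured['headers'])
        return body

# A hypothetical inner app that always fails
def failing_app(environ, start_response):
    start_response('500 Internal Server Error',
                   [('Content-Type', 'text/plain')])
    return ['boom']

app = ErrorPageMiddleware(failing_app)
```

Because the wrapped result is itself a WSGI callable, another middleware could wrap app again; that is all "stacking" means.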

The return

WSGI's return is even more interesting. An app (the callable) needs to return an iterable. The idea is that the app can return one chunk of data as and when it becomes available. The specification actually states that the headers are first sent out when the first non-empty string is yielded, so an error state can be achieved pretty late into the handling cycle. The only catch is that no non-empty strings should have been returned before changing a 200 OK into a 500 Internal Server Error. There's an excellent article on the Internet about the benefits of WSGI; the link is provided in the resource box at [1]. It should definitely be read before attempting to write your first WSGI application. It gives you a feel of what the developers were trying to achieve, and what you should and shouldn't do with the spec.
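Since any iterable will do, a generator is a natural fit; here is a minimal sketch (the app and chunk names are made up for illustration) of an app that yields its response in pieces:

```python
def chunked_app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])

    def generate():
        # The server sends the headers only when the first
        # non-empty string is yielded, so slow work can still
        # happen between chunks
        for chunk in ['Hello ', 'chunked ', 'world']:
            yield chunk

    return generate()
```

A server iterates over the return value and writes each chunk to the client as it arrives, which is what makes streaming responses possible with no extra API.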





Structuring big WSGI applications


The WSGI spec is quite flexible, as there is only one entry point into your application. You can, therefore, write the app as a single function, a class implementing the __call__() method, or even as a complete module. When you get around to writing bigger WSGI applications (and this applies not only to pure WSGI applications, but also to micro frameworks that expose the application as a WSGI callable, including Flask, Pylons, WebOb, and even Tornado in WSGI mode), you'll generally want to write them as a module. There are a couple of reasons for this. First, it's always a good idea to write self-contained code. A well-written WSGI application can be distributed using distribute, along with a requirements.txt file specifying dependencies for pip to download. In fact, because it's so easy to get WSGI apps on the Internet for free (using Google App Engine or Heroku), and because these services generally mandate writing system-independent modular code, it is a good idea to write code that can be run out-of-the-box on these services. Second, wouldn't it be awesome to do the following:
from MySuperAwesomeWSGIApp import app
# Here it's important to note exactly what app is.
# app is your WSGI callable. Its definition and
# initialisation reside in the imported module.
# Yes, initialisation. The app should be fully
# init-ed and ready to be served.

While it's perfectly acceptable to make a WSGI app face the Web directly using mod_wsgi, it isn't exactly advisable for performance and security reasons. Security-wise, it's a bad idea to run the Python interpreter inside your Web server's process. You will generally want to serve your apps with a pure-Python server and put an nginx reverse-proxy in front. With that in mind, let's examine a few WSGI servers and look at how to use them. First is Python's own wsgiref. This is the reference WSGI server implementation, which comes as a part of the Python standard library. It is thread-pooled, reasonably fast, and great for testing, development, or even running as a production server on your intranet (although for a particularly busy intranet that might be pushing it a bit). Use it as follows:

from wsgiref.simple_server import make_server from MyWSGIApp import app httpd = make_server('', 5000, app) httpd.serve_forever()

When you are ready to move up to the production level (i.e., on the real Internet), you will want to use a more robust server. There are a few options to choose from. You can go threadpooled, event-based, or use the current rage in Pythonapplication level micro-threads that are cooperatively scheduled. Python calls them greenlets, and they are great for implementing asynchronous TCP servers, such as the one which comes with gevent. Gevent can be installed from the repos or from pip, and the server set up is as follows:
from gevent.wsgi import WSGIServer from MyWSGIApp import app httpd = WSGIServer(('', 5000), app) httpd.serve_forever()

import OneMiddleware import AnotherMiddleware import ThirdMiddleware app = OneMiddleware(app) app = AnotherMiddleware(app) app = ThirdMiddleware(app) # Now hook up the app to the server - more on that later

Because WSGI apps can be called by another WSGI app (as is evident in how it is used in middleware), you can hook up your WSGI app (which youve thoughtfully written as a module) to other WSGI apps. But, to which ones? Well, there are URL-based routers, sub domain-based routers and authentication filters that are widely available online. You will find links to a few in the resource section.

Serving WSGI applications

Most people try serving WSGI applications using Apache and mod_wsgi. They fail miserably and complain that Python is slow or that WSGi is dead technology. What they dont realise is that theyre serving it wrong.

Note that the host-port pair is passed as a tuple. Gevents greenlet-based server is currently regarded as the fastest Python implementation of the WSGI server, and because the server itself is written in C (its actually written using libev), it really is quite fast. A different way of serving up a WSGI application is to use Green Unicorn, which is actually ported over from Rubys Unicorn project. Its a pre-forking threadpooled server, but it can make use of gevent and greenlets. To use Green Unicorn, the gunicorn package is installed from pip or your package manager (ensure you install gevent because that is an optional dependency). Serving the application is a little different howeverGreen Unicorn is a command line application, so you need to use the following command:
$ gunicorn -k gevent -w 4 -b 127.0.0.1:5000 MyWSGIApp:app

34 | May 2013

The -k gevent bit instructs gunicorn to use gevent and its greenlets, and the -w 4 tells it to use four worker threads (you should generally use no more than n+1 threads, where n is the number of cores your processor has, which is somewhat like the number of threads you tell make to spawn with the -j option). The -b option supplies the host-port pair (this can be omitted) and, finally, you have the application itself: the colon notation basically says 'from MyWSGIApp import app' and use that. You should put a reverse-proxy in front of the application; something like the nginx config below will do just fine:
server {
    listen 80;
    server_name _;

    location / {
        proxy_pass        http://127.0.0.1:5000;  # address your WSGI server listens on
        proxy_redirect    off;
        proxy_set_header  Host $host;
        proxy_set_header  X-Real-IP $remote_addr;
        proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Ideally, all your static files should be served by nginx.

WSGI applications on Heroku

Heroku is apparently the next big thing in cloud technology. Its new Cedar stack makes it very easy to write WSGI applications and run them on the Web. With Heroku, you start with a virtualenv and develop your app like you generally would. When you're ready to push the app into Heroku, in the root directory of the application, issue the following command:

(venv)$ pip freeze > requirements.txt

Heroku recognises a Python app by the requirements.txt file, and it specifies all the dependencies of your application. Optionally, you can write a runtime.txt in which you specify which Python interpreter to use. Your runtime.txt file should include the following line:

python-x.y.z

where x.y.z is the Python version. Supported versions are 2.7.3 and 3.3.0 (2.7.3 is the default runtime), but you can specify any publicly available version from 2.4.4. You can even use pypy-1.9 if you want to. I couldn't find a way to use Stackless Python or PyPy 2.0 beta without writing my own buildpack. Remember that because Heroku's Procfile expects a single command to start the server, and because there are no reverse-proxies between the app and the Web (save for Heroku's routing mesh), your best bet is to use gunicorn.

WSGI applications on Google App Engine

Google App Engine defaults to using CGI for Python applications. To use WSGI, you must use the Python 2.7 runtime. Specify the following in your app.yaml file:

runtime: python27
api_version: 1
threadsafe: true

It's important to specify the thread safety of your app. If you're not sure, say false; it doesn't hurt. Specify the handlers as shown below:

handlers:
- url: /.*
  script:

End notes

Because WSGI is so nifty, I generally stick to Web frameworks that expose their apps as WSGI callables. Flask is a great framework if you want to write WSGI applications. Pylons is a lot more involved. Django does have its own server, but it can also act as a first-class WSGI app, so you are covered. I'll let you in on a secret: if you choose to use Green Unicorn and your app can handle static files, you don't even need to use a reverse-proxy; Green Unicorn is good enough to face the Web. Maybe in another article, I'll explore how to actually write a WSGI application with WebOb and, maybe, Flask. I won't touch on writing raw WSGI applications in this article. It all hinges on demand, so if you want them, do pester OSFY with mails.

References

[1] WSGI and the Pluggable Pipe Dream - http://lucumr.pocoo.org/2011/7/27/the-pluggable-pipedream/
[2] Selector, a WSGI URL-based router - lukearno/selector/
[3] A bunch of useful WSGI libraries and middleware - http://www.

By: Boudhayan Gupta

Describing himself as a 'retard by choice', the author believes that madness is a cure-all for whatever is wrong or right with society. A social media enthusiast, he can be reached at @BaloneyGeek on Twitter.


Exploring Software

Guest Column

Anil Seth

Network Deployment and Alternative OSs

Booting over the network can be quite a challenge. However, once it is set up, it works like a charm.
Booting over the network is very useful, whether you are using servers with network file storage or diskless client workstations. The idea is very simple. An operating system is installed in a directory on a network server. The client machine boots from the file server using the PXE protocol and an NFS root. The server uses DHCP and an NFS server to configure the client and point it to the appropriate root directory.

The Linux Terminal Server Project (LTSP)

In the late 1990s, the LTSP was created to enable low-end PCs to function as thin clients, especially for schools. The project created server scripts and a lightweight distribution for the client machines to make it easy for deployment at schools. An even greater benefit than being able to use obsolete PCs was the simplification of support. We found it far easier to manage over 60 clients with a server in various labs and offices than to maintain even a single lab with networked PCs. In order to be able to fine-tune the distribution for each server environment, the LTSP project now creates scripts that build a client OS on the server. So, Ubuntu-based clients would be created on Ubuntu, and SUSE-based clients on SUSE. A client machine boots and starts a custom display manager called LDM, which talks to the server, finds out the sessions available and starts an X session on the server. By default, all applications run on the server and the display is shown on the client. The project has added capabilities for running local applications.

What makes it hard?

The scripts can be non-trivial. The same root directory may be used by many clients. So, the root should be read-only; however, some files may need to be modified specifically for the client, and some directories and files need to be writeable for normal operations. The common solution was to mount the required files and directories on tmpfs. However, the recent versions are now switching to a UnionFS.

A second major change Ubuntu has made is to use a Network Block Device, by default. You create a SquashFS image of the root directory. An NBD server makes it available on the server. The client connects to it using an NBD device, though it is not clear if an NBD server offers an advantage over NFSv4. The scripts have to keep evolving as distributions change the booting infrastructure, e.g., the replacement of mkinitrd by Dracut. The generation of a custom initramfs is a critical need for the diskless boot environment. LTSP is not supported on recent versions of Fedora, probably because of the switch from init to systemd.

On a 1 Gbps network, a diskless workstation was logged in and ready to use in just under a minute. The additional time needed on a 100 Mbps network was less than 15 seconds. The major difference in network speed is evident in how snappy a user's environment is after signing in. This is because while the X server is running on the client, all applications, including the window manager, are running on the server. There is a lot of network traffic and there are resultant latency issues. Modern desktop environments with desktop effects, like Unity or the GNOME 3 shell, can be painfully unresponsive in the diskless environment. The fallback mode of GNOME, or KDE with desktop effects disabled, runs pretty well in diskless environments. Media-rich applications may be impacted by the network delays. Hence, LTSP makes it easy to use local applications. You install local applications on the server in the change-root environment. For instance, you can install Firefox and Flash Player for an excellent browsing experience. The window manager and panel are running on the server, so running a local application requires a special script, ltsp-localapps. It makes very clever use of the xprop command of the X server to run the command on the local system.

In case you want to experience the feel of a diskless session without setting up the diskless environment, you may do what I routinely do. You will need two machines: the workstation you use and a machine you will use as a server.
1. Create a password-less login environment for SSH. This is an optional convenience.
2. You will need to include your user in the audio group for reasonable behaviour of pulseaudio.
3. Create a file .xinitrc in your home directory as follows (using the KDE session):
start-pulseaudio-x11
# replace startkde by gnome-session for the GNOME desktop
ssh -X anil@server ". .bash_profile; startkde"

4. In the .bash_profile on the server, add the following:

5. From a terminal window, run the command given below:

$ xinit -- :1

So, what will the future of LTSP be? LTSP is going to have to face the challenge of dealing with the current trend in desktops, where the desktop effects are increasingly integrated into the window manager. In addition to the ability to run local applications, you can build a fat client with ltsp-build-client. This implies that the desktop manager and applications will run on the local diskless machine. LTSP discussions continue to be very active and are an indication of the long-term prospects for this project. In a recent discussion, it was pointed out that the issue with desktop effects is really an issue with OpenGL over the network. There is an exciting project called VirtualGL which may save the situation. As software people, we should be thankful that every advance that breaks stuff brings us more opportunities for work and exploration!

By: Anil Seth

The author has earned the right to do what interests him. You can find him online at,, and reach him via email at



Consistent Hashing With Memcached

Consistent hashing is a method that's widely used to reduce cache invalidation. Let's take a closer look at how it can be used.

Let's Try

Memcached is a popular in-memory, distributed key-value store that is frequently used as a caching layer (especially for websites). It was developed in 2003 by Brad Fitzpatrick for hosting his website LiveJournal. Since then it has become extremely popular and is being used in Facebook, Zynga and Wikipedia.

Distributing keys and values

Memcached is a distributed key-value store, which means that it distributes the key-value pairs across multiple cache instances. Consistent hashing is a method of distributing data across multiple cache instances such that the addition or removal of a node causes less disruption in the cache hits. The way Memcached distributes the key-values is pretty simple, if there are multiple Memcached instances:
1. For a given key, the client creates a hash (hash(key)) and then maps it to a particular instance using the modulo operation: hash(key) % number of instances.
2. The client stores the value in the instance that matches the result of the above operation.

Simple enough, right? But let us say that we have reached a stage where the existing instances of the cache have outgrown the amount of data they can cache: for instance, if your subscribers have grown 10-fold and the number of hits has gone up 20-fold. The logical thing to do would be to increase the number of cache instances. And therein lies the problem: every time a new instance is introduced, the second variable in the above operation (the number of instances) changes. And when that happens, a key previously mapped to one instance would now be mapped to another.

Let me illustrate that. Let's assume there are 10 instances of Memcached. Let me try to store a key/value pair into this cluster. Let me also assume that the key 'Hello' produces a hash of 12356 (hashes are much longer, large enough to ensure that there is little collision). So if I were to map it to an instance, I would use the following operation:

12356 % 10 = 6

This means that the data for the key 'Hello' would be stored in instance number 6. Now let us add a couple of instances, taking the count of instances to 12. Where would the key 'Hello' map to now?

12356 % 12 = 8

Because the client will look for the key 'Hello' in the eighth instance, it will no longer be able to find the value. This is why we use consistent hashing. So what is consistent hashing? Simply put, it is a way of ensuring that keys map consistently to the same cache instance even when the cache instances are added or removed. The caching function does its best to make this scenario possible, but there will be some cache misses. How does consistent hashing achieve that? Simple! It hashes the identifier for the caches (typically IP address and port combinations) with the same hashing function used to hash the key, and then applies a clever trick to map the keys to the instances.

Adding and removing instances

Assume that the hashing function can only create hashes in the range -100 to +100 (it would be a pretty useless hashing function if it had only 201 possible values, but for the sake of demonstration, let us work with it). Now assume that the hash values were the dial of a clock (arranged in a circle just like they are on a clock). So the values would start at -100 at the top and increase clockwise until they reach +100 at full circle (see Figure 1).

Figure 1: Hash values on a circular dial

Now, let's hash the instances and plot the resulting hash (which will be in the range -100 to +100) on the dial. Let us assume the instances are at points A, B and C as shown in Figure 2. Also, let the keys hash to the values -70, -30, +10 and +50 (as shown in Figure 3).

Figure 2: Memcached nodes on the circular dial along with hashes
Figure 3: Nodes and hashes of keys on the circular dial

Now, to map keys to an instance, move clockwise and assign each key to the nearest instance that comes after the key. So, in this case, -70 and -30 will go to B, +10 will go to C and +50 will go to A. What happens if an instance is removed? Let us assume that the instance B is removed. Then the values -70, -30 and +10 will go to C and the others will remain as is. Even after removing an instance, only two keys are re-mapped. The others will continue to be served from the same cache instance.

Now let us add another instance (see Figure 4). Say we added D at the location shown in the diagram. What would happen is that -70, -30 and +10 will still map to C, +50 would map to D, and A would have nothing mapped to it. Again, you will see that the cache has not been disrupted too much in this case.

Figure 4: Removing a node

Consistent hashing is now included in most of the popular Memcached clients. For example, Memcached Java Client, a popular Java client for Memcached, has support for it.

By: Harish Babu

The author, employed by a leading telecom VAS provider, is an open source fan who likes exploring new technologies. He can be reached at harish (dot) babu (at) gmail (dot) com.
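The clock-dial scheme in this article can be sketched in a few lines of Python. This is an illustration of the idea only, not the code of any actual Memcached client; the node addresses and the choice of MD5 as the hash function are assumptions made for the example:

```python
import bisect
import hashlib

def ring_hash(key):
    # Hash both keys and node identifiers with the same function.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes):
        # Place each node on the circular dial at the hash of its
        # identifier (typically an "ip:port" string).
        self.points = sorted((ring_hash(n), n) for n in nodes)

    def get_node(self, key):
        # Move clockwise: pick the nearest node at or after the key's
        # hash, wrapping around to the first node past the top.
        hashes = [h for h, _ in self.points]
        i = bisect.bisect_left(hashes, ring_hash(key))
        return self.points[i % len(self.points)][1]

ring = HashRing(["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"])
node = ring.get_node("Hello")  # the same key always lands on the same node
```

Adding or removing a node only shifts the keys that fall between the changed node and its neighbour on the dial; all other keys keep their mapping.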



Unlock the Potential of QForms

How To
Here's an introduction to simple yet powerful utilities that come with QForms.

It is required that forms check for valid data. Code-generated forms come with built-in validation according to the constraints set on the columns in the database. Validation is also needed on other pages: for example, whether the value put in by the user in the Email field is really a valid email address or not, or if '0SFY' is an appropriate name for a person, and so on. QForms allow all this to be done quite easily using their Form_Validate function. This function is created within the class definition of the QForm and is run every time the form is submitted (i.e., every time a QServerAction or QAjaxAction is executed). You can write the code for form validation within this function.

Again, it is not required to write all the validation code in the Form_Validate function. You can set the MaxLength, MinLength and Required properties directly on the QTextBox to get it validated. Often, there is a need for grouping controls. For example, for a customer relationship management (CRM) Web app, it is important to create a composite QControl to bill the customers. In such cases, it would be best to create a composite control for reuse and write the validation logic in the Validate function of the composite QControl or QPanel rather than in the parent form. The strategy for how and when to use custom QControls can be found at http://

Custom QControls

It would be a bit difficult to explain what a custom QControl can do, but the example at would certainly impress and convey its scope. Typically, custom QControls are groups of other HTML controls which form a unit. This unit is highly reusable: simply use it where that group of controls is required. QControls allow writing of their own validation logic and their own layout. If you wish, a template file may be specified, which in turn will govern the layout of the control. In short, you have almost all the goodies you get with a QForm, except that a QControl does not make up a page but only a set of controls.

Once defined, a QControl can be used to render the entire control by adding only a few lines. Basically, a variable (a member of the instance of the QForm class) is initialised as an instance of the composite QControl, and that variable is rendered using the HTML template file. If you would rather get your hands dirty, visit

Database-based session handling

Using a database to handle PHP sessions is not really a necessity. However, it can benefit the application as you plan to move ahead. While one server suffices to begin with, a single server, no matter what, has its limits. With the rise in popularity of your apps, sooner or later, more than one server with a load balancer in between would be needed. At that point, you could be faced with a new problem: managing user sessions. Normally, PHP manages sessions automatically by using files. These files are not automatically available to other servers running the same app. To resolve this problem, PHP allows you to use a database for storing (reading and writing) user sessions, so that the sessions can be read and written from any of the multiple servers running the same PHP code. Any one of the many guides on the Internet can help you gain an insight into this, but the fact is that in order to use the feature, you just need to set the values of DB_BACKED_SESSION_HANDLER_DB_INDEX and DB_BACKED_SESSION_HANDLER_TABLE_NAME correctly in the file. QCubed will take care of the rest. The importance and meaning of those variables is explained in the file.
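The idea of a database-backed session store is language-agnostic. Here is a toy sketch of my own in Python, purely illustrative and unrelated to PHP's or QCubed's actual handler API, showing why a shared database lets any server behind the load balancer read a session written by another:

```python
import pickle
import sqlite3
import uuid

# Toy illustration: any app server that can reach the database can load
# a session written by another server, which file-based sessions stored
# on one machine cannot offer. In practice the database would be a real
# shared server, not the in-memory SQLite used here for demonstration.

class DbSessionStore:
    def __init__(self, path):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS sessions (id TEXT PRIMARY KEY, data BLOB)")

    def save(self, sid, data):
        # Serialise the session dict and upsert it under its session ID.
        self.db.execute("INSERT OR REPLACE INTO sessions VALUES (?, ?)",
                        (sid, pickle.dumps(data)))
        self.db.commit()

    def load(self, sid):
        row = self.db.execute(
            "SELECT data FROM sessions WHERE id = ?", (sid,)).fetchone()
        return pickle.loads(row[0]) if row else None

store = DbSessionStore(":memory:")
sid = uuid.uuid4().hex
store.save(sid, {"user": "anuj"})
```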
All this helps your application scale as and when needed, without requiring you to think about scaling at the very beginning.

Centralised FormState handling

QCubed saves the FormState (the state of the application) on the server side to help make coding easier. However, saving the FormState is itself messy! The problem is, to save FormStates, you need a location. There are a few options provided by QCubed, as listed below.
1. QFormStateHandler: Saves the FormState as a hidden variable within the page itself. The advantage is that it does not consume any space on the server side. The disadvantage is that it makes the pages huge and bulky. One should use it only when nothing else works!
2. QSessionFormStateHandler: Saves the FormState as part of the user session. The downside here is that, with time, it uses a lot of session data and can slow down the user session or even crash the session. However, it makes more sense because all FormStates are cleared when the user logs out and the session is destroyed. This option is the default.
3. QFileFormStateHandler: This saves the FormStates as individual files on the server side; this is much better than QSessionFormStateHandler because it does not make the session swell. The disadvantage of this (and of QSessionFormStateHandler) is that if there is more than one server running the same application, no server knows about the FormStates created by the others, a curse for a distributed app!
4. QDbBackedFormStateHandler: Create a table with the needed structure (explained in php) in one of your databases, and set the correct values in the configuration file. QCubed will start maintaining the FormStates in the assigned table! Interestingly, if the database-based session handler and QSessionFormStateHandler are used, the FormStates get saved in the database (as part of session data) as well. However, QDbBackedFormStateHandler will operate faster. This is the FormState handler that should be used if a considerable amount of traffic is expected and it is required to distribute the application load on more than one server.

We have already mentioned QQuery, QCubed's way of querying the database, and that it automatically escapes strings before sending them to the database so that this does not have to be done manually. Since queries are not written by hand, the application is more secure and the chances of missing that one spot are almost nil. However, security woes are not limited to SQL injection attempts by malicious users. One of the attack techniques, called Cross Site Scripting (XSS), does not bother the website itself but can be used to steal data from the machines of other users visiting the site or app. As developers, it becomes our responsibility to protect our users from attacks by others. XSS attacks are carried out by entering valid but dangerous HTML code into those input boxes of a website whose contents will be displayed somewhere; e.g., a comment input box will cause the input entered into it to be displayed on at least one page. If dangerous JavaScript is entered into such input boxes, it can result in output that might seem innocent and harmless, but can extract data from the user's browser. With the advent of HTML5's more powerful JS APIs, it can wreak havoc on the user's computer. To prevent this, QCubed comes with built-in XSS prevention. All that needs to be done is to set the CrossScripting property of the concerned textbox to QCrossScripting::Deny or QCrossScripting::HTMLPurifier. The former prevents any dangerous HTML from being entered and raises an exception, while the latter utilises the HTMLPurifier library to filter out the dangerous HTML and purify it. While the defaults set by the QCubed framework are good enough, you can learn more about how to use them in projects, in detail, at
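The escaping idea behind this kind of protection can be illustrated outside PHP too. Here is a small Python sketch of my own (not QCubed's implementation) showing how escaping neutralises submitted markup:

```python
import html

def sanitize(user_input):
    # Escape <, >, & and quotes so that submitted markup is rendered
    # as inert text instead of being interpreted by the browser.
    return html.escape(user_input)

comment = '<script>stealCookies()</script>'
print(sanitize(comment))  # &lt;script&gt;stealCookies()&lt;/script&gt;
```

Libraries like HTMLPurifier go further than blanket escaping: they parse the input and keep a whitelist of safe tags, so harmless formatting survives while scripts are stripped.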

Recent developments

By the time you read this, QCubed 2.2.0 should be out with new improvements:
- A friendly installer to assist in installing the framework.
- An improved QQuery with 'having' clause support.
- An upgraded jQuery UI version.
- QCubed code moved to GitHub at https://github.com/qcubed/.
- QCubed API documentation. This is currently a work-in-progress and may not be perfect at the moment, but do bear with it for some time.

By: Vaibhav Kaushal

The author is a core contributor to QCubed. He gives QCubed-related advice on his website



For the last couple of months, we have been discussing storage systems. A few of our student readers have written to me, asking for Linux-specific OS questions to be featured in the column. So let's take a break from our discussion on storage systems, which we will continue in next month's column.

Sandya Mannarswamy

In this month's column, I feature a number of interview questions related to the Linux operating system's internals, along with programming problems.
Questions based on the internals of the Linux OS

1. How do you configure and custom-tailor your Linux kernel before building the sources?
2. In simple terms, the Linux kernel is nothing but a software program written in C (I wish it were that simple). Many of the common programming functionalities such as printing output, reading input and allocating memory are library routines in the standard C library, and any user application written in C makes calls to standard C library routines and utilises these routines. Why is it that the Linux kernel does not use the standard C library for such functions?
3. What is the significance of using the asmlinkage qualifier on function definitions in the Linux kernel? Give a common example of when this qualifier is used.
4. What are the top and bottom halves of an interrupt handler? What work would you do in the top half and what work would you do in the bottom half?
5. If, at any point in time, you want to find out how many interrupts of each type have been received on each CPU, how would you do so?
6. Recall that operating systems like Solaris/HP-UX/AIX support threads at both the user and kernel level. How does Linux support threads at the kernel level?
7. What are reserved page frames? Why are they required?
8. The Linux kernel has a zone allocator and a slab allocator. What is the zone allocator's unit of allocation and what is the slab allocator's unit of allocation? Why do we need two allocators?
9. What is the use of the sysenter instruction? How can it be used to speed up system call performance?
10. You are writing a new kernel module that requires the use of floating point instructions. You know that floating point operations in user applications are seamlessly supported. You are told that usage of floating point instructions inside kernel code is not advisable. Is this correct? If so, why?
11. If you wanted to find out how long the system has been up since the last reboot, how would you do so? What would your code look like if you are doing this from inside the kernel code itself?
12. Why is a virtual file system layer needed in the Linux kernel?
13. You have written a bad piece of code in which you are trying to de-reference a NULL pointer. If your code was part of a user application, what happens? If your piece of code was part of a kernel module, what happens?
14. What are the different IO schedulers available with the Linux kernel? What is the difference between a noop scheduler and a deadline scheduler?
15. From version 2.4, the Linux kernel supports a unified buffer cache mechanism. Why is a unified buffer cache better than having a separate page cache and a separate buffer cache?

Programming questions

16. You are given a binary search tree and an integer k. Find out if there exist two nodes of the binary search tree that sum up to k.
17. You are given N integers without any duplicates, where N is an even number, and a value k. You are asked to form N/2 distinct pairs from these N integers such that the sum of each pair of numbers is divisible by the given k. Write a program to determine whether such N/2 pairs can be formed and, if so, output the N/2 pairs.
18. Write a program to determine whether a given binary tree is a binary search tree (BST). This is one of those 'Gotcha' questions where most folks tend to use the following check: verify at each node that the value of the left child is less than or equal to the node's value and the value of the right child is greater than or equal to the node's value. Beware that this is not an adequate check. Can you give an example of a binary tree that satisfies this check at each node and still ends up not being a binary search tree?
19. Write an algorithm to determine the k-th to the last element in a singly linked list. If k is greater than the size of the list, the snippet should return NULL.
20. You are given a set of N elements. You are asked to partition the set into two sub-sets such that the sum of elements of each sub-set is the same. Write an algorithm to determine whether a given set can thus be partitioned.
21. You are given an NxN matrix of integers. Each row and each column is sorted. You are asked to find whether a given integer k exists in the matrix. What is the complexity of your algorithm?
22. You are given a sorted circular array and are asked to find the largest element in the array. Write a program for this.
23. You are given an endless stream of single digit integers. Whenever you see an integer that has repeated an odd number of times, output the integer value and 'odd'. Whenever you see an integer that has repeated an even number of times, output the integer and 'even'. What is the best time and space complexity algorithm you can come up with?
24. You are given an array A of N integers without duplicates. You are asked to find a pair of indices i and j such that A[i] < A[j] and the difference between i and j is the maximum.
25. You are asked to implement a stack data structure that supports push and a modified version of pop, such that pop removes the most frequently encountered data item in the stack at that point. For instance, given the following sequence of operations:
(a) Push 3, push 4, push 4, followed by pop, should yield 4.
(b) Push 3, push 4, push 4, push 2, push 3, push 3, followed by pop, should yield 3.
Note that the user can call pop at any point after any number of pushes.

My must-read book for this month

This month's must-read book suggestion comes from Gaurav. He recommends the book 'Joel on Software' by Joel Spolsky, who hosts a well-known programming blog. According to Gaurav, "This book is sub-titled 'Joel on Software: And on Diverse and Occasionally Related Matters That Will Prove of Interest to Software Developers, Designers, and Managers, and to Those Who, Whether by Good Fortune or Ill Luck, Work with Them in Some Capacity' and essentially covers topics right from coding to product management. Joel Spolsky is a legend in the programming world and his caustic humour and deep insights come through strongly in this collection of essays about software." Thank you, Gaurav, for your suggestion. If you have a favourite programming book/article that you think is a must-read for every programmer, please do send me a note with the book's name and a short write-up on why you think it is useful, so that I can mention it in this column. It would help many readers who want to improve their software skills. If you have any favourite programming questions/software topics that you would like to discuss on this forum, please send them to me, along with your solutions and feedback, at sandyasm_AT_yahoo_DOT_com. Till we meet again next month, happy programming and here's wishing you the very best!

By: Sandya Mannarswamy

The author is an expert in systems software and is currently working with Hewlett-Packard India Ltd. Her interests include compilers, multi-core and storage systems. If you are preparing for systems software interviews, you may find it useful to visit Sandya's LinkedIn group, 'Computer Science Interview Training India'.

may 2013 | 45


The Emergence of Open Source Players in the Database Space

This introductory article takes a sweeping look at the current database domain. The rapid expansion of this domain over the past five years is fascinating and makes for interesting reading, especially since more and more open source DBs are becoming popular.


Technology has been changing rapidly over the years. If we look at the telecom space, we had barely been introduced to mobile phones in early 2000, and now we have so many variants of them; the third generation of mobile technology is already behind us. In the computing space, the bulky PCs and laptops of 2005 have morphed into sleek quad-core phones and tablets. Software has also been keeping pace with these developments, as seen in the tremendous changes in the operating system space. One area of software that had not changed for a long time, however, is the database space. Until recently (say, 2009), and probably for many people even today, the term 'database' implied an RDBMS (relational database management system). The relational database has been the de-facto standard of databases, and its basic technology has evolved little over the past 40 years. The Apache Foundation's Hadoop signalled the end of this era. Today, the database field is going overboard to catch up with other software domains, trying to make up for lost time. It would be reasonable to say that the database

domain is one of the most interesting fields for a software professional to be in today. What has triggered this change? The answer is best summarised by quoting Michael Stonebraker, the great DB scientist: "One size does not fit all." The use cases of databases have expanded tremendously in the past few years, and trying to fit an RDBMS to all of them does not make sense. Let us look at some of the factors driving the evolution of the database. 1. Connectivity has played a major role in changing DB usage. Ubiquitous Internet connectivity has meant access to a lot of websites and the emergence of social networks like Facebook. These need to host very large databases that demand high performance. At the same time, there is also the need for a less rigid and simpler schema for databases. 2. With the increasing power of computing devices and the emergence of Big Data, a need for more real-time behaviour has emerged. For example, there is immense demand for real-time analytics. These kinds


of applications require databases to be faster and to have a more predictable response. This has also led to databases that can process different kinds of inputs. There is a requirement for real-time analytics of stock market information, a streaming input that is quite different from the traditional DB input. A good columnar in-memory DB will be required to handle this. 3. The usage of databases in different Web applications has meant that the nature of the data stored is very different. This applies to changes in the nature of data types, the relations between data and the nature of queries. A typical example could be LinkedIn, which stores profiles of people and links the many fields of one person to the many fields of another person. If a traditional RDBMS schema were applied, it would be very difficult to manage. A good document database will help this cause. There are many more reasons for this evolution. For example, people no longer want to use SQL to operate databases. They would rather use the Web paradigm; so there is a need for databases to support PHP drivers and natively support Representational State Transfer (REST).


Figure 1: DB Space

The increasing importance of open source in the DB space

Along with the transformation in the DB field, another critical aspect is that open source databases have led this transformation. Hadoop was one of the first databases to signal the transformation. Hadoop-HBase has been rapidly followed by great open source new-generation databases. Today, in the emerging NoSQL field, open source databases rule the roost. This has also led the industry to take a more serious look at older open source databases, the most important being PostgreSQL. This is the grand old man of all open source databases and, in the past few years, it has become more and more prominent. In this article, we will take a brief look at the open source offerings in various functional spaces of the DB panorama.

Figure 2: OLTP architecture

The relational DB space

Relational databases (RDBMS) have been the de-facto DB standard for years. The RDBMS emerged in the 1970s and has been the subject of a lot of academic research. Its foundation is strongly based on relational algebra, which offers a mathematical basis for its principles. This has made these databases very stable and often quite predictable. Relational databases have been put to use in two broad functional areas: transaction processing (OLTP) and analytical processing (OLAP). OLTP: This has been the primary strength and use of databases. OLTP databases offer transaction processing capability. An OLTP system comes into play when we access our bank account or book a train ticket. These databases require a very fast response and, at the same time, very

reliable operation. OLTP systems normally offer ACID (Atomicity, Consistency, Isolation and Durability) guarantees. OLTP DBs are normally medium-sized (up to a few TB) and are generally disk-oriented. These systems generally handle a mix of reads and writes. A very brief conceptual diagram of an OLTP database is provided in Figure 2. Postgres and MySQL are the dominant open source players in the OLTP-relational database space. These databases are very mature; Postgres, for instance, has been evolving over the years and now offers better performance and features. It is easily one of the most functionally rich DBs in the world and compares favourably with any of its commercial competitors. Many commercial and cutting-edge databases have been spun off from Postgres; a good example would be Greenplum. MySQL (now acquired by Oracle) is another prominent open source database and has been used extensively for Web based applications. One of the most famous deployments of MySQL is Facebook. Emerging OLTP: The OLTP space has remained unchanged for a long time. However, one of the recent strong trends has been to move the OLTP DB to memory for very high performance. This technology is one of the most advanced in



Figure 3: Star schema

Figure 4: Column vs row

the DB domain today. In-memory OLTP database systems are very similar to their disk counterparts except for the algorithms that run inside the system, which are tuned for memory access rather than for disk access. This is one DB space where there have been very limited offerings from the open source community. One of the most famous open source in-memory DBs is VoltDB, a clustered in-memory OLTP database. Another open source attempt in the in-memory DB space is CSQL. OLAP: Analytical processing is another traditional application of RDBMS. Although the relational model is often considered unsuitable for analytical processing, the use of RDBMS for analytics has thrived over the years. Of late, analytical processing has been evincing a lot of interest due to the explosion of data and the value people attach to statistical trends in data. Traditionally, the RDBMS used for OLAP processing is very similar to the OLTP RDBMS. The key change has always been the schema. Analytical processing uses a special kind of data schema called a star schema. An extremely simple view of this schema is provided in Figure 3. The fact table in the centre of the schema is very large, spanning millions of rows and many hundreds of columns. The remaining tables (on the star points) are called dimension tables. These focus on one aspect of the data and are relatively small tables spanning a few hundred rows, at best. There has been no specialised open source database for OLAP processing. However, Postgres has been used by many

commercial OLAP databases (like Greenplum and Netezza) as their foundation. Postgres has an OLAP extension package called cube, which offers reasonable performance for multi-dimensional queries. Emerging OLAP: OLAP has been drawing a lot of interest lately. In response, there have been more innovations in the OLAP field than in the OLTP field. The most significant of these innovations has been the column store. In a column store, the data is stored in column format. Most analytical processing happens on the projection of one or two columns, and columnar databases take advantage of this fact. Columnar databases can also offer very high compression, as it is quite likely that values within a column repeat, whereas a complete row repetition is practically impossible. Columnar databases are extremely fast at aggregation. Figure 4 provides an indicative difference between columnar and row-based databases. One of the most innovative databases in the relational columnar field is MonetDB, which is the result of extensive academic research in the Netherlands. MonetDB includes almost all the cutting-edge concepts of columnar databases (like vectorisation). Another interesting development in the OLAP space is the emergence of stream processing: the processing of input data as it flows into the system, without a requirement to store it. The key advantage of this is the reduction in storage space; however, it puts severe demands on processing performance. Not many open source databases support stream processing, and the ones that do have not been built from scratch to perform it. A couple of key open source players of interest in the streaming space are Storm and Kafka. Though these cannot be considered independent databases, they are tightly related to the database space.

The non-relational DB space (NoSQL)

The most happening thing in the database space in the past five years is NoSQL. I will not define NoSQL here, but will instead briefly examine a couple of variants in the NoSQL space. The non-relational DB space should ideally include semi-structured databases (like XML databases) and probably also graph databases, whose mathematical foundations lie in graph theory rather than relational theory. However, to keep the article readable and in focus, we will look at only two areas that have generated a lot of attention. K-V stores: One of the primary reasons why non-relational databases are generating a lot of interest and commercial attention is the simplification of the database schema. Traditional RDBMS tend to have complex entity-relationship-based database schemas. In the new generation of Web systems, there are many cases where there is no need for such complex, related schemas. Most of the time, data is nothing but a key-value pair. The key could be one kind of data type and its value could be anything. For example, the key could generally be an

integer field, whilst the value could be a number (say, an age), a string (say, a name), or a picture (say, a photo). All these disparate data items could be bundled into a single table and processed. In addition to being flexible and simple, these databases can also scale extremely well. Open source has been most dominant in these databases. Two interesting DBs in this space are Voldemort and Riak. Voldemort, named after the Harry Potter villain, is one of the most deployed KV stores and is used extensively by LinkedIn. Voldemort brings parts of ACID to KV stores. It has a good storage layer and also supports MVCC as a consistency model. This makes Voldemort a complete database. Riak is another interesting database in the KV store space because of the way it has adopted the Web language as its front-end in lieu of SQL. Document stores: Another evolving area in the non-relational database space is document stores. These stores, as the name suggests, store documents, similar to the way an RDBMS stores rows in a table. It is essential to understand that the notion of a document corresponds to a row in the RDBMS, not a table. Document stores can potentially store different kinds of document types like XML, Word documents, etc. Besides the ability to store documents, document stores also have the capability to query parts of a document as if they were column values of an RDBMS. The most famous open source implementation of the document store is MongoDB, which brings certain RDBMS concepts, including indexing and ad-hoc querying, to document stores. This makes MongoDB very attractive for RDBMS migrations for document storing purposes.


Figure 5: Database distribution

One of the most popular open source relational clustered databases is VoltDB, which falls into the category called NewSQL. OSFY carried a detailed article on NewSQL in its February 2012 edition. Non-relational clustered space: Unlike the relational database space, the current non-relational database space always comes with horizontal scalability requirements. Hence, most of the NoSQL databases scale naturally. The most interesting of all in the clustered non-relational space is HBase; the intriguing part is the way Hadoop is used to achieve this scalability. Cassandra, probably the most famous of all the NoSQL databases, is a columnar, clustered, scalable database.

The clustered DB space

Things to watch

Along with the NoSQL explosion, the other critical trend to dominate database growth has been horizontal scalability. A few years back, the focus was always on vertical scalability but, with Google's Bigtable research and Amazon's Dynamo research, along with the use case of Big Data, horizontal scalability is the in-thing. This requirement has also brought out the famous CAP theorem, which argues that C (Consistency), A (Availability) and P (Partition tolerance) cannot all be achieved at the same time; one or more of them has to be sacrificed. This has led to different models, a famous one being eventual consistency. Horizontal scalability in the database space is offered through clustered databases. These concepts are mentioned in this article to encourage the reader to go back and dig deeper. Relational clustered space: The relational database space has adopted clustering seriously. One of the best adoptions of clustering in the relational space is by MySQL, in its NDB cluster implementation. This is a shared-nothing implementation, which offers scalable clustering. Another interesting attempt at clustering is by a MySQL variant called Drizzle. The third one is not a full database but a replicator, known as the Tungsten Replicator. This works with the MySQL database to provide very fast and scalable replication.

A lot of technologies, jargon and open source products have been introduced here to bring about reader awareness. The database space continues to expand rapidly. One of the areas to really watch out for is the OLTP+OLAP merger brought about by SAP HANA. Another area to watch is Big Data analytics and what open source databases may offer in this space. The complexity now is that there are too many database product options with very marginal differences. The current state is such that most of the databases are building blocks. A DB like Riak offers only part of the database, and a store like Memcached needs to be used with it; sometimes, an external replicator also needs to be used. This is quite different from an integrated big-bang solution like, say, Oracle RAC. This approach makes these systems flexible, but it also makes the space a bit more technical. So I expect there will be binding products or natural plug-in architectures that may bring these databases together.

By: Prasanna Venkatesh
The author is a lead architect and the head of a DB architecture team in a top telecom MNC. He has wide experience in the middleware software domain and currently specialises in the internals of emerging databases.



Heterogeneous Parallel Programming: Dive into the World of CUDA

A previous article in this series, titled 'Introducing NVIDIA's CUDA', covered the basics of the NVIDIA CUDA device architecture. This article covers parallel programming using CUDA C, with sequential and parallel implementations of a vector addition program.

How To

Parallel programming and general-purpose GPU computing are some of the hottest trends in computer science today, due to the decreased prices of multi-core systems and the increase in compute efficiency. Various parallel programming languages like OpenCL and CUDA have been developed and evaluated over the years. This article will cover the basics of CUDA C, invoking kernels, threads and blocks with a vector addition program, and it aims to give you an insight into beginning programming on your CUDA device. A few important points before we begin. To use CUDA on your system, you will need to have the following installed: 1. CUDA-capable GPU hardware 2. A supported version of Linux with a GCC compiler and toolchain 3. The NVIDIA CUDA toolkit and drivers It is presumed that if you already have a CUDA device within your system, you probably have the latest toolkit and drivers installed and configured correctly. In case you do not have the NVIDIA CUDA drivers configured, or you have recently upgraded your hardware with a CUDA device, you could follow the simple steps given below to configure your device. 1. Download the toolkit from the NVIDIA website, available

at no cost from the NVIDIA downloads page. Select the correct product release depending on your operating system preferences. This download contains an all-in-one package, which includes the CUDA toolkit, SDK code samples and the required drivers. After downloading, follow the steps in the NVIDIA 'Getting Started Guide for Linux' to install the drivers, CUDA samples and the toolkit. This guide will help you set up the complete environment on the system. In this article, I will cover the serial and parallel versions of a vector addition program. Once you have understood the basics of the parallel vector addition program, you could use the concepts pretty well in parallelising other algorithms as per your requirements. First, let's write a simple C program to perform vector addition of two arrays. Open your favourite editor and write a simple vector addition code that looks like what's shown below:
#include <stdio.h>

static const int N = 100;


//Add the vectors and store the result in vector c
void vector_add(int *a, int *b, int *c)
{
    int i;
    for (i = 0; i < N; i++) {
        c[i] = a[i] + b[i];
    }
}

int main()
{
    int a[N], b[N], c[N];
    int i;
Figure 1: Sequential vector addition

    //Initialize the vectors: a[i] = i and b[i] = 2*i
    for (i = 0; i < N; i++) {
        a[i] = i;
        b[i] = 2 * i;
    }

    //Call function vector_add to compute the result
    vector_add(a, b, c);

    //Print the resultant array
    for (i = 0; i < N; i++) {
        printf("%d %d %d\n\n", a[i], b[i], c[i]);
    }

    return 0;
}

exits the program. It is a single-threaded execution. The figure above would make the serial execution concept clear, where t(x) represents the time slot of execution. Now, CUDA gives us the functionality to perform the same operation in parallel. What it does essentially is offload the data parallel sections to the GPU device and send the result back after computation. In what follows, you will get an insight into launching kernels, writing a device and host code, and performing the same serial vector addition program given above, in parallel. Here is a simple vector addition code in CUDA.
#include <cuda.h>
#include <stdio.h>

#define N 100
#define numThread 1   // in this example we keep one thread in one block
#define numBlock 100  // in this example we use 100 blocks

__global__ void vector_add( int *a, int *b, int *c )
{
    // keep track of the index
    int tid = blockIdx.x;
    while (tid < N) {
        c[tid] = a[tid] + b[tid];
        tid = tid + numBlock;  // shift by the total number of blocks, i.e. 100 in our case
    }
}

int main( void )
{
    int *a, *b, *c;
    int *dev_a, *dev_b, *dev_c;

    // allocate the memory on the CPU
    a = (int*)malloc( N * sizeof(int) );
    b = (int*)malloc( N * sizeof(int) );
    c = (int*)malloc( N * sizeof(int) );

This very simple serial vector addition program creates two arrays of integer values, and adds them using the vector_add function. Compile the code using the following command:
gcc sequential_vector.c -o sequential

Run it using the command given below:
./sequential


In the above program, the processor runs each task sequentially, one after the other. The loop starts with the first index and computes each element consecutively till the last index, and then

    //Initialize the vectors: a[i] = i and b[i] = 2*i
    for (int i=0; i<N; i++) {

        a[i] = i;
        b[i] = 2 * i;
    }


__global__ void vector_add(int *a, int *b, int *c)
{
    // keep track of the index
    int tid = blockIdx.x;
    while (tid < N) {
        c[tid] = a[tid] + b[tid];
        tid = tid + numBlock;  // shift by the total number of blocks, i.e. 100 in our case
    }
}

    // allocate the memory on the GPU
    cudaMalloc( (void**)&dev_a, N * sizeof(int) );
    cudaMalloc( (void**)&dev_b, N * sizeof(int) );
    cudaMalloc( (void**)&dev_c, N * sizeof(int) );

    // copy the arrays 'a' and 'b' to the GPU
    cudaMemcpy( dev_a, a, N * sizeof(int), cudaMemcpyHostToDevice );
    cudaMemcpy( dev_b, b, N * sizeof(int), cudaMemcpyHostToDevice );

    vector_add<<<numBlock,numThread>>>( dev_a, dev_b, dev_c );

    // copy the array 'c' back from the GPU to the CPU
    cudaMemcpy( c, dev_c, N * sizeof(int), cudaMemcpyDeviceToHost );

    //Print the results
    for (int i=0; i<N; i++) {
        printf("%d %d %d \n\n", a[i], b[i], c[i]);
    }

    // free the memory we allocated on the CPU
    free( a );
    free( b );
    free( c );

    // free the memory we allocated on the GPU
    cudaFree( dev_a );
    cudaFree( dev_b );
    cudaFree( dev_c );

    return 0;
}

As shown above, add a __global__ qualifier to the function vector_add. Notice that there are very few changes between the serial and parallel versions of the function vector_add. The __global__ qualifier indicates that this is a device function that will be called from the host. blockIdx is a built-in CUDA runtime variable, a three-component vector used to identify threads in a one-, two- or three-dimensional index space. Imagine a block as a 3D matrix; to access the different components of this vector, use blockIdx.x, blockIdx.y and blockIdx.z. In this code, we will be using 100 blocks, each with a single thread, in the grid, which will be seen while we analyse the host code, and hence we use only blockIdx.x, which returns the current block number. The condition while (tid < N) checks that the bounds for the array computation have not been exceeded, and the kernel computes the array sum taking the block value, tid, as an index. We add numBlock to the tid value to shift the index by the number of blocks, as each block computes just one array index, and we have 100 blocks for 100 array indexes. This explanation pretty much sums up the device code for the program. Now move on to the host code, which prepares the GPU for execution and invokes the kernel. It works by allocating memory on the GPU and the CPU, transferring the input vectors to the GPU, launching the kernel and transferring the result back to the host (CPU).
int main( void )
{
    int *a, *b, *c;
    int *dev_a, *dev_b, *dev_c;

    // allocate the memory on the CPU
    a = (int*)malloc( N * sizeof(int) );
    b = (int*)malloc( N * sizeof(int) );
    c = (int*)malloc( N * sizeof(int) );

    // fill the arrays 'a' and 'b' on the CPU
    for (int i=0; i<N; i++) {
        a[i] = i;
        b[i] = 2 * i;
    }

    // allocate the memory on the GPU
    cudaMalloc( (void**)&dev_a, N * sizeof(int) );
    cudaMalloc( (void**)&dev_b, N * sizeof(int) );

Let's begin with analysing each part of the code, and then compile it to get our results. From the previous article (Part 1 of this series), you know that CUDA programs execute in two places: the host (your CPU) and the device (GPU). You might be a bit surprised by the fact that writing the device code is much simpler than writing the CPU host code. Hence, let's begin by analysing the device code first. Since we have 100 array values in this code, to simplify things, let's have 100 blocks running simultaneously, where each block runs a single thread. Hence, let's set numThread to 1 and numBlock to 100, and use these variables later while calling the device from the host. The device code is:

cudaMalloc( (void**)&dev_c, N * sizeof(int) );

// copy the arrays 'a' and 'b' to the GPU
cudaMemcpy( dev_a, a, N * sizeof(int), cudaMemcpyHostToDevice );
cudaMemcpy( dev_b, b, N * sizeof(int), cudaMemcpyHostToDevice );

As in the above code, and similar to allocation in C, the variables a, b and c are allocated memory on the CPU. cudaMalloc() is a standard sub-routine of the CUDA API to allocate memory on the device; it works similarly to the C malloc() function we used earlier. Since you cannot modify memory allocated to the device directly from the host in CUDA, cudaMemcpy() is used to transfer data to the device. This method takes a pointer to the destination memory, a pointer to the source memory, the number of bytes to be copied, and a flag which determines the direction of the memory transfer, respectively.
vector_add<<<numBlock,numThread>>>( dev_a, dev_b, dev_c );

Figure 2: Parallel vector addition


This line is a call to the device kernel from the host, to execute the function on the device. It is similar to a serial function call, with some additional syntax. Blocks are organised in three-dimensional grids, and threads are organised in three-dimensional blocks. numBlock and numThread are passed as arguments within the angle brackets to let the device know the structure to be adopted for the computation.
cudaMemcpy( c, dev_c, N * sizeof(int),cudaMemcpyDeviceToHost ) ;

This copies the resultant data back from the GPU to the host. As you can see, it is similar to the cudaMemcpy() we used above, with the last argument changed to cudaMemcpyDeviceToHost to indicate that the data transfer is from device to host. The rest of the code is pretty much self-explanatory, except that you use cudaFree() to free the memory allocated on the GPU. Now that you understand the code well, compile it to verify your results. Save the code and type the following command in the terminal:
nvcc -o parallel

You may wonder whether the GPU code performs about 100 times faster than the CPU code, since we have created 100 blocks that execute in parallel. This is not the case, as there is an overhead involved in copying data between the CPU and the GPU, and in copying the resultant data back to the CPU. Hence, CUDA is generally used for computing algorithms that are significantly data intensive, as it then makes sense to spend some time on data transfer. GPUs are, therefore, generally known as data intensive computational devices. As a next step, you could try programming matrix addition on the GPU in parallel to get a good grip on kernels, threads and parallel execution. Your best companion for this would be the links and books mentioned in the References section at the end of this article. I also recommend you visit the NVIDIA website and documentation, as it will give you a good idea about the power of CUDA, if you are not already impressed by what this simple GPU device on your laptop can do. Next up in this series, I might cover an advanced CUDA program with multiple threads and blocks on a grid, and analyse the running time of the code. I will follow it up with a discussion on OpenACC and other simpler parallel programming models that have come up recently. Till then, start thinking of algorithms in parallel. The world of parallel computing is here to stay! References
[1] CUDA C Programming Guide by NVIDIA; http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html
[2] CUDA Application Design and Development by Rob Farber
[3] Programming Massively Parallel Processors by David B. Kirk and Wen-mei W. Hwu

Now, to execute the code, run the following command:
./parallel


If you have followed this guide correctly, it will print the vectors a and b along with their resultant sum c. This verifies that the first GPU code you wrote has worked correctly. Still confused? Well, have a look at Figure 2, which will give you a clear understanding of how things work in parallel in this case.

By: Tejaswi Agarwal

A FOSS enthusiast, the author is passionate about compute power utilisation, run time and memory utilisation of algorithms. Some of his research areas are computer architecture, parallel programming and performance engineering. He can be contacted at



Monitoring and Graphing Your Network With Cacti

Let's Try

Cacti, an open source network graphing application, utilises RRDTool, a data logging and graphing system for time series data. Read on to learn more about it.

Cacti monitors both the incoming and the outgoing ADSL traffic on my Cisco 877W ADSL router. In the absence of a router that supports SNMP, your own Linux machine or another device can be monitored. At the time of writing, the latest Cacti version is 0.8.8a.

;date.timezone =
date.timezone = "Europe/Athens"

Installing Cacti

The next step is to create a Cacti MySQL user (called cacti by me) as well as a Cacti database (also called cacti by me). I used the following commands:

Your Linux distribution probably includes a ready-to-install Cacti package. PHP and MySQL should already be installed and working; if they are not, the setting up of Cacti will not finish. The following steps are required to set up Cacti: 1. Ensure that the date.timezone variable has been defined inside the php.ini file (mine is /etc/php.ini), in order to avoid several warning messages during installation. Check it using the following command:
$ grep -i TimeZone /etc/php.ini
; Defines the default timezone used by the date functions
;

mysql> create database cacti; Query OK, 1 row affected (0.04 sec) mysql> CREATE USER 'cacti'@'localhost' IDENTIFIED BY 'password'; Query OK, 0 rows affected (0.06 sec) mysql> grant CREATE, INSERT, SELECT, DELETE, UPDATE on cacti.* to cacti@localhost; Query OK, 0 rows affected (0.02 sec) mysql> grant CREATE, INSERT, SELECT, DELETE, UPDATE on cacti.* to cacti; Query OK, 0 rows affected (0.00 sec)
Edit the file cacti/include/config.php and update it with your MySQL-related information. Mine is as follows:

/* make sure these values reflect your actual database/host/user/password */
$database_type = "mysql";
$database_default = "cacti";
$database_hostname = "";
$database_username = "cacti";
$database_password = "password";
Figure 1: Cacti's console tab

Import the Cacti database (included in a file called cacti.sql that is provided by Cacti) using the following command:

$ sudo cat cacti.sql | mysql5 -u root -p cacti

You can check if the required Cacti tables were created by using the following MySQL commands:

Ensure Apache has access to and knows about the Cacti directory. For the first task, both the user and group owners of the Cacti directory may need to be changed using the chown command; the user and the group of the Web server process should own the Cacti directory. For the second task, the Cacti directory should either be added inside the root directory of your Web server (defined by the DocumentRoot variable) or httpd.conf should know about it (using the Alias command). 7 Before setting up Cacti's Poller to run as a cron job, it must be run manually as follows and the output watched for possible error messages:
$ sudo -u www /usr/bin/php cacti/poller.php
06/28/2012 11:56:07 AM - POLLER: Poller[0] WARNING: Cron is out of sync with the Poller Interval! The Poller Interval

mysql> use cacti;
mysql> show tables;

Figure 2: Adding the Cisco 877W router




Figure 3: The available Cisco 877W interfaces

is '300' seconds, with a maximum of a '300' second Cron, but 1656 seconds have passed since the last poll!
06/28/2012 11:56:08 AM - SYSTEM STATS: Time:0.1121 Method:cmd.php Processes:1 Threads:N/A Hosts:2 HostsPerProcess:2 DataSources:0 RRDsProcessed:0

snmpwalk path: /usr/bin/snmpwalk
snmpget path: /usr/bin/snmpget
RRDtool path: /opt/local/bin/rrdtool
PHP binary path: /usr/bin/php

If everything is OK, Poller should be added to cron in order to run automatically. I needed to put it in the www user's crontab because, on my machine, Apache runs as a process that is owned by the www user. You can do it as follows:

$ sudo -u www crontab -e
*/5 * * * * /usr/bin/php /opt/local/share/cacti/poller.php > /dev/null 2>&1

Configuring Cacti

Now it is time to configure Cacti. I opened the URL http://localhost/cacti/install/index.php; the default credentials are admin/admin. When asked, I used the path information shown above (change the full paths of your commands if needed). When asked for the type of installation, select New Install and click Next >>. Correct the path of the RRDTool, do not change any other value, check that the PHP executable path is correct and click Finish. You are now ready to use Cacti via the URL http://localhost/cacti/index.php. The first thing Cacti asks you to do is to change the password for the admin user. After successfully setting the new password, you will see Cacti's console tab (Figure 1).

Installation is over and done with but, as you can see, the installation of Cacti is a little tricky, so you should be very careful during the process.

Running Cacti

After installing and configuring Cacti, you are ready to add devices and graphs to it. Feel free to view the cacti.log file, which (for my installation) is located inside /opt/local/share/cacti/log, for error messages.

Monitoring a Cisco 877W router

What you need to know in advance in order to monitor your Cisco router with Cacti is the name of the SNMP community (CactiCom), and the hostname or the IP address of the router (cisco).



Figure 4: Creating the graph for the Cisco ADSL router

Figure 5: The Cacti graph that was created

The steps for adding the Cisco router are the following: Select Devices on the left. Select Type: Cisco Router and Status: Enabled, and then click Add. You will see what's shown in Figure 2, and you will have to fill in the information. The most important fields are the Hostname (cisco in my case), the SNMP Community string (CactiCom in my case), and the SNMP Version, for which you will have to select Version 2. Then click Create. On the next screen, you will have to press Create Graphs for this Host on the upper right side. You will then see Figure 3. This figure lists all the available Cisco interfaces for this particular router. What interests us is Interface No 14 (Dialer1), which is the ADSL Internet connection interface. The desired graph type should be In/Out Bytes. Select your interface of choice and click the Create button. Another interesting interface is Number 5 (Dot11Radio0), which is Cisco's Wi-Fi interface. Then, select Graph Trees from the left menu and click Add. Select the options that you can see in Figure 4. Then click Create. Select the Graphs tab and then, from the Default Tree, select the desired host. You will have to wait a little, until some data is obtained for the graph to be created. You will see something similar to Figure 5, as the graph will take a little while to get populated. If you click on the graph, you will get daily, weekly, monthly and yearly graphs. The Graphs tab is the key screen for end users to view the graphs for their devices and to change settings. Cacti is a very capable tool but, in order to harness its power, you need to experiment with it. An easier tool to configure is MRTG, but it has fewer capabilities. Cacti also allows you to use existing plug-ins or create your own. Plug-ins allow developers to add Cacti features without changing Cacti's source code. I strongly recommend learning both Cacti and MRTG, in order to use the right tool for the right job.

BY: Mihalis Tsoukalis

The author enjoys photography, UNIX administration, programming iOS devices and creating websites. You can reach him at or @mactsouk.



A Look at the Top Three Network Monitoring Tools


This article talks about a few important aspects of network monitoring, and compares three leading tools that are of great importance to IT administrators.
In a well-managed IT infrastructure, network monitoring acts as the eyes and ears of an organisation, spotting problems before they appear. Systems administrators need complete visibility into their critical IT components such as servers, applications and networks. Monitoring tools can catch a server crash, a failing application or, in some cases, highly utilised network bandwidth.

Features of network monitoring tools

A network monitoring tool is usually hosted on a standalone server and runs its client software on each machine to be managed or monitored. The tool usually runs its own copy of a database such as MySQL or Postgres, which stores all scripts, historic events and actions. In some modern tools, an agent is not required to be run on managed machines, making it an agent-less installation. Table 1 lists a stack of features, with examples, which must be available by default in a network monitoring tool. When it comes to monitoring large-scale IT infrastructures, systems administrators need an architecture with more advanced features to make their life easy. Given below is a list of some important features.
Auto discovery: It is cumbersome for administrators to add each managed device manually. Modern monitoring tools scan the entire network segment to enumerate devices and perform auto discovery of the operating system, configuration and settings. This feature automatically helps admins get a glimpse of their IT inventory.
Network traffic stats: Earlier, monitoring tools used to just look at CPU, memory and disk utilisation. However, this is not enough, and network bandwidth usage is a key factor to be aware of, especially when the managed machines are supposed to access the Internet. Besides, by monitoring network traffic, admins get an insight into the bandwidth usage of the Internet service provider's line, which helps them make capacity planning decisions.
Log monitoring: All operating systems create activity logs. For example, in the case of Linux, SSH logs and bash logs are created, while for Windows, the application, system and security event logs are generated. A good tool must be capable of reading and parsing log files. This sounds easy but can be tricky, because the operating system opens and locks log files, which requires tools to sneak into the file without tampering with or corrupting it.
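The kind of log check described above can be sketched as a small shell script that counts failed SSH logins by reading, not locking, the log file. The log path and the "Failed password" message text are typical Linux defaults, not universal:

```shell
#!/bin/sh
# Count failed SSH password attempts by reading the auth log.
# The path and the matched string are typical defaults; adjust
# both for your distribution.
LOG="/var/log/auth.log"
if [ -r "$LOG" ]; then
    failed=$(grep -c "Failed password" "$LOG")
else
    failed=0   # log absent or unreadable on this host
fi
echo "failed_ssh_logins: $failed"
```

A real monitoring agent would run such a check on a schedule and raise an alert when the count crosses a configured threshold.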

Monitoring tools should be able to check log file size, parse text for particular string patterns, etc, and perform configured actions. This gives admins a lot of power to tune their infrastructure monitoring for better control.
Device grouping: This is important for easy management of devices such as firewalls, servers, etc, in specific groups. In some cases, administrators choose to create department-wise groups, or a group for each building or floor. They populate these groups with the network switches, servers and desktops pertaining to that department or floor. In a growing infrastructure, this feature is very important.
Alert management: Merely monitoring a network is not enough. A good tool should let admins produce alerts. For example, if the CPU of a critical server crosses 90 per cent usage, or if a firewall is dropping multiple packets in a row, the tool should create a trouble ticket and email, or optionally send a short message to, the admin's mobile phone. Almost all tools provide such facilities today to enhance their usefulness; however, admins should look into the configurability and facilities available in alert management prior to selecting the proper tool.
Customisable Web dashboard: A good monitoring tool should let admins access its statistics over a Web interface. Besides, the Web interface must be customisable, letting them decide what should be on the dashboard's front page. Modern tools provide widgets, which are small screen sections or windows that can show monitoring statistics of the admin's choice and can be moved or removed.
Integrating with the helpdesk: Recording of events that are the result of threshold violations is very important and should be an automated process. The monitoring tool should provide the necessary hooks or connectors so that the trouble ticket/helpdesk system can be easily connected. Monitored events and the applicable actions should result in a trouble ticket.
This helps decide how much manpower ought to be utilised to address those events, and intelligent action can be taken based on that data.
Report generation: All monitoring tools today provide some level of report generation, based on date, time, etc. However, a detailed report that is device-specific or event-specific is really essential for an admin. For example, a report generator should be able to drill down into a particular event, such as a TCP timeout on a particular server, and provide historic occurrences of that event for that server. These levels of granular detail help administrators establish a correlation between the event and its root cause.
The following is a list of new features found in commercial monitoring tools; however, the open source world will surely catch up in the days to come.
Plug-in API support: While a few open source tools do provide this, there is still scope for improvement. API calls of the monitoring engine can be exposed in a secure way, so that developers can write their own plug-ins. This is especially important when there is a new network device or software application in the market that must be monitored.
Trend analysis: The network or server monitoring industry is rapidly moving away from the preventive to the pro-active mode. Administrators want to know the historic trends of problems and make a judgment on the corrective actions to be taken today, to prevent problems that might happen tomorrow. For example, continuously high CPU utilisation on a MySQL server over a period suggests that one or more stored procedures are either not optimised or misbehaving. This can be related to the application that uses those procedures. Thus, if that application is expected to be used more, an analysis of the trends can tell admins that the MySQL server is going to run into trouble.
Security monitoring: Very soon, no monitoring tool will be useful unless it supports cyber security monitoring. Attacks happening at Layers 2 and 3, as well as application-based security problems at Layer 7, should be trapped and reported by a good monitoring tool. This functionality is available in a few commercial tools; however, incorporating Snort along with Nagios or any other monitoring tool can prove to be a powerful security monitoring solution.

Table 1

Layer              Feature examples
Application        App-specific log check; app run/hung state; app performance
Database           Connectivity; performance; query optimisation and response
Operating system   Task performance; security events; OS service run/hung state
Web                URL query string monitoring; Web services response; Web service up/down
Server             CPU utilisation; memory utilisation; disk quota utilisation
Network            Packet drops and anomalies; network utilisation; bandwidth usage

Nagios, Zenoss and Zabbix

So let's talk about the three famous open source monitoring tools, namely Nagios, Zenoss and Zabbix, and compare them. While there are many features to compare, we will discuss only those that matter the most to mid-scale IT infrastructure management. Nagios: This is a famous first-generation network monitoring tool and is used in all Linux distros. Developed in C and PHP, it supports multiple flavours of open source backend databases, as well as the legacy flat file structure. Zenoss: Written using Python scripting, Zenoss provides


A comparison of Nagios, Zenoss and Zabbix

Feature                              Nagios    Zenoss     Zabbix
Basic features (CPU, disk, memory)   Yes       Yes        Yes
Auto discovery                       Partial   Yes        Partial
Licence                              Free      Free       Free
Inventory support                    No        Yes        Yes
Plug-in support                      Yes       Yes        Yes
Web dashboard                        Good      Excellent  Excellent
Windows monitoring                   Partial   Yes        Yes
SNMP trapping                        Partial   Yes        Yes
Syslog monitoring                    Partial   Yes        Yes
Trend analysis                       Partial   Yes        Partial
Google Maps View                     No        No         Yes
Graphical reports                    No        Yes        Yes
User-friendly configuration          Yes       Partial    Partial
Performance and reliability          Medium    High       Low
Plug-in API support                  Partial   Yes        Yes
Security monitoring                  No        No         No

a highly flexible monitoring platform for mid-scale and large-scale infrastructures. It supersedes Nagios in a few cases, especially when it comes to alert management. Zabbix: This is really an enterprise-class open source tool. Written in C and PHP, it has very elaborate dashboards that provide admins a detailed drill-down. While it is tough to decide which tool is best for monitoring, here are a few guidelines. Administrators should first look at their infrastructure from the uptime perspective and decide what really needs to be monitored, rather than checking all that they can possibly monitor. This focused approach is important because it is easy to get distracted by the multiple features available in each tool. Hence, focusing on the basic monitoring requirements mentioned earlier should be first on the agenda. As a second step, admins should look into the applications to be monitored and decide whether or not custom scripting needs to be done to achieve what they need from the monitoring standpoint. The third step should be to focus on reporting and trend analysis because, as infrastructure grows, it is essential to have a historic record of the problems in the IT infrastructure. The last but important step would be to see if security monitoring is a requirement in the given scenario. If yes, then it is crucial to decide the level of additional scripting and log generation that would be required. The generated log can

then be captured by a monitoring tool and reported as a problem incident via the trouble ticketing system. Nagios, Zenoss and Zabbix are all industry-grade, professional tools with large installation bases. It has been observed that Nagios and Zenoss perform very well on the Ubuntu platform, while Zabbix runs great on other distros. Zenoss is unique among the three tools compared, because it offers more features, interacts well with multiple databases and other tools, and has also proved itself to be a robust solution even for high-performing, large-scale IT infrastructures. Besides these three, there are tools such as Cacti, OpenNMS, Cricket, etc, which I leave readers to find out more about on the Net. It is always better to compare an open source tool with a commercial one, and then decide and choose the required features. By: Prashant Phatak
The author has over 22 years of experience in the field of IT hardware, networking, Web technologies and IT security. Prashant runs his own firm named Valency Networks in India (www., which provides consultancy in IT security design, security penetration testing, IT audits, infrastructure technology and business process management. He can be reached at





Linux Firewall: Executing Iprules Using PHP

Let's Try

A firewall keeps the network secure by analysing packets and determining whether they should be allowed through or not. This article shows how to manipulate Iprules using PHP scripts.

Linux has its own firewall, which contains iptables to perform packet filtering and set up masquerading. Iptables is a user-space command line program. It stores the set of iprules and ipchains used to configure the Linux firewall. Ipchains is a set of commands stored in the iptables space. Now let's look at a simple iprule, which demonstrates how to stop an incoming echo request. To do so, the following iprule is used: iptables -I INPUT -p icmp --icmp-type echo-request -j DROP. In the above iprule, the -I parameter indicates inserting the rule at the beginning of the ipchain, the -p parameter indicates the protocol, while the --icmp-type parameter has two possible values: echo-request and echo-reply. Since you want to stop incoming ping requests, echo-request is chosen; the -j parameter indicates a jump to a specified target when the packet matches the rule. For more details about iptables and iprules, go to the link indicated in reference [1]. After executing the above rule, the machine does not accept ping requests from other machines in the network. To reduce the burden of remembering various iprules, you can have a GUI that takes only parameter values, forms a rule according to the packet processing required by the firewall, and executes it via PHP. But the PHP page will be executed by a Web server, so the Web server must have sudo powers to make changes in iptables by executing PHP scripts. This article shows you how to give sudo power to the Apache Web server, the changes required and how to execute them. In the boot folder, the kernel configuration file is required to be changed. The CONFIG_NETFILTERING option is used when the

computer works as a firewall or router. If CONFIG_NETFILTERING is commented out, then uncomment it and set CONFIG_NETFILTERING=Y. If the CONFIG_NETFILTERING attribute is missing, then add it and compile the kernel by executing the following three commands:

make
make modules
make install

Disable the SELinux firewall and any other firewall. Open the visudo file and make Apache (the Web server used to execute PHP scripts) a sudo user:

Apache ALL = (apache) NOPASSWD: ALL, (root) /sbin/iptables

In the example given below, the user interface shown in Figure 1 takes parameter values from the user and passes them to the PHP page. The code given below shows how to execute an iprule in iptables:
<?php
$id  = $_POST['ruleid'];  // rule no.
$op1 = $_POST['s'];       // chain option in case of filtering
$op2 = $_POST['s1'];      // chain option in case of NAT
$tab = $_POST['table'];   // filtering or NAT
echo "<pre>";
if ($tab == "filter") {
    $cont = "sudo iptables -D $op1 $id";
    shell_exec($cont);
    print "<script>alert('Rule $id from $op1 is deleted');</script>";
    include("deleterule.php");
    exit();
} else {
    $cont = "sudo iptables -t nat -D $op2 $id";
    shell_exec($cont);
    print "<script>alert('Rule $id from $op2 is deleted');</script>";
    include("deleterule.php");
    exit();
}
?>

Figure 1: Delete a rule from iptables
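Before wiring shell_exec() into a page, the command string the script assembles can be previewed safely from a shell. A sketch that builds, but only echoes, the same delete command; the chain name and rule number here are hypothetical examples:

```shell
#!/bin/sh
# Build the iptables delete command exactly as the PHP script does,
# but echo it instead of executing it, so it can be inspected first.
chain="INPUT"   # hypothetical chain (the $op1 value in the PHP code)
ruleid=3        # hypothetical rule number (the $id value)
cmd="sudo iptables -D $chain $ruleid"
echo "$cmd"
```

Once the printed command looks right, it can be run by hand before trusting the Web form to execute it.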

$_POST is an associative array used to get the form fields' values when the form data is transferred via the post method. To execute the formed string (an iprule) on the basis of the entered parameter values, the shell_exec() function is used. The above code is used to delete a specific input chain rule, output chain rule or forward chain rule from filtering, or a pre-routing or post-routing chain rule from NAT, by giving a rule number. References
[1] Quick_HOWTO_:_Ch14_:_Linux_Firewalls_Using_iptables#. UVGokheovX9 [2]

BY: Krunal Patel

Krunal Patel is an assistant professor at A. D. Patel Institute of Technology. He has been involved in teaching open source technologies for the last four years. He can be reached at

Statement about ownership and other particulars about LINUX For You
FORM IV (See Rule 8)
1. Place of publication: New Delhi
2. Periodicity of its publication: Monthly
3. Printer's Name: Ramesh Chopra
   Nationality: Indian
   Address: LINUX FOR YOU, D-87/1, Okhla Industrial Area, Phase-1, New Delhi 110020
4. Publisher's Name, Nationality and address: Same as (3) above
5. Names and addresses of individuals who own the newspaper & partners or shareholders holding more than 1% of the total capital: EFY Enterprises Pvt Ltd, D-87/1, Okhla Industrial Area, Phase-1, New Delhi 110020

I, Ramesh Chopra, hereby declare that the particulars given above are true to the best of my knowledge and belief.
Date: 28-2-2013
Ramesh Chopra, Publisher



A Peek Into Some Cloud Monitoring Tools

Cloud monitoring tools collect data and illustrate patterns that might be difficult to spot otherwise in a dynamic infrastructure environment, in which services are provided by Cloud Service Providers. Read on to know more...

Cloud Corner

Organisations are keen to leverage cloud computing to improve agility and scalability. Conversely, those that have adopted cloud computing face the following challenge: decreased visibility into the performance and governance of services being delivered to their end users. Cloud Service Providers (CSPs) do provide dashboard facilities to track the status of their services; in addition, they provide alert and notification mechanisms to recognise and report service outages in a timely manner. Customers also need to know the status of the applications on the cloud, and hence the need for continuous monitoring. Cloud monitoring refers to monitoring the performance of physical or virtual servers, storage systems, networks, and the applications running on them. Cloud monitoring tools can collect data and illustrate patterns that might be difficult to spot otherwise. To maintain high availability of applications, cloud monitoring tools can be used to collect metrics, gain

insights and perform corrective measures (if required) to guarantee business continuity.

Monitored metrics

Cloud monitoring is essential for controlling and managing cloud resources and software infrastructure; it provides key performance indicators (KPIs) for infrastructure, platforms and applications. Given below is a list of parameters that can be monitored by various open source tools, if not by a single solution:
Application and cloud server response time
Number of concurrent users
Application and cloud server availability
Network latency
Cloud service utilisation
Overall bits/sec and requests/sec served by all of the processes
Response time for specific transactions
Memory usage
Disk usage
CPU utilisation
System load
Network interface activity
Database activities
Swap space
Performance of attached disks
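Several of the host-level metrics in this list can be sampled from standard Linux interfaces. A sketch using /proc and df; the output labels are ours, not those of any particular monitoring tool:

```shell
#!/bin/sh
# Sample a few of the metrics listed above from /proc and df.
load=$(cut -d' ' -f1 /proc/loadavg)                         # system load (1 min)
mem_total_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)  # memory
disk_used=$(df -P / | awk 'NR==2 {print $5}')               # disk usage of /
echo "load=$load mem_total_kb=$mem_total_kb disk_used=$disk_used"
```

A monitoring agent such as gmond or the Zabbix agent collects values like these continuously and ships them to a central server for graphing and alerting.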


Why cloud monitoring is essential

Cloud monitoring is necessary for a number of reasons. The more important ones are listed below.
Security: Security in the cloud is a major roadblock to cloud adoption for business-critical applications, and in certain industries in which data security is extremely vital.
SLA management: Due to the dynamic nature of the cloud, continuous monitoring of QoS attributes is essential to implement SLAs; and the multifaceted nature of the cloud landscape demands a refined means of managing SLAs.
Capacity planning: Cloud monitoring tools help to identify what resources you are using in the cloud. The performance and capacity requirements can be assessed and, accordingly, resources can be scaled up and down to effectively achieve performance levels and satisfy customers.
Resource management: Resource management is crucial in cloud computing. It maximises resource utilisation and minimises the total cost of both the cloud infrastructure and application hosting.
Troubleshooting: Root cause analysis is a challenging area in cloud computing, considering the involvement of various components and the complex architecture. Cloud monitoring tools can help to diagnose and rectify an issue.
Performance management: Cloud performance management is the monitoring of resources that manage application performance in cloud environments. It includes supervision of applications to maintain optimal performance and availability.
Billing: Monitoring is a very basic requirement to provide measured services in the cloud environment for CSPs and cloud consumers.

Open source tools for cloud monitoring

Zenoss is an open source platform released under the GNU General Public License (GPL) version 2. It provides an easy-to-use Web UI to monitor performance, events, configuration and inventory. Zenoss is one of the best for unified monitoring, since it is cloud agnostic and open source. Zenoss provides powerful plug-ins named Zenpacks, which support monitoring on hypervisors (ESX, KVM, Xen and HyperV), private cloud platforms (CloudStack, OpenStack and vCloud/vSphere), and the public cloud (AWS).


SpringSource is a division of VMware that has acquired Hyperic, Cloud Foundry, RabbitMQ, and Gemstone. Hyperic can be used for the auto-discovery of all components of virtualised applications. It automatically discovers, manages and monitors IT and network resources on the private cloud (VMware) and public cloud (Amazon Web Services). It is optimised for virtual environments with integration with vCenter and vSphere. Hyperic provides open source IT resource and network monitoring application software. It auto discovers system resources such as operating systems, hardware, databases, middleware, applications, virtualisation and services. It provides monitoring of network services (SNMP, SMTP, HTTP, and ICMP) and host resources (processor load, disk usage and




Figure 1: Areas affected by cloud monitoring

system logs); supports remote monitoring; enables auto-discovery of system resources; identifies problems; performs root cause analysis; and offers security through access control.

Nagios provides monitoring and reporting for network services and host resources. Nagios Core is an open source infrastructure monitoring system that enables organisations to diagnose IT infrastructure problems before they have an adverse effect on critical business processes. Nagios provides monitoring of cloud resources, such as compute, storage and network services. Nagios is capable of monitoring virtual servers and OSs in both physical and virtual environments. With Nagios, it is easy to identify issues in the cloud environment, detect network outages and check application availability. The Nagios cloud monitoring tool offers multiple benefits, such as high availability, fault tolerance and data availability. Nagios provides monitoring of public cloud services such as Amazon EC2 (Elastic Compute Cloud), Amazon S3 (Simple Storage Service), etc. Eucalyptus is a product for building private and hybrid clouds; its open source version does not provide built-in monitoring, but that can be achieved with Nagios, using scripts in the Extras directory along with third party tools to interact with Nagios. Nagios allows you to write plug-ins in just about any language and run them on remote servers.
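As an illustration of how small such a plug-in can be, here is a sketch of a shell plugin that checks root filesystem usage and follows the Nagios exit-code convention (0 = OK, 1 = WARNING, 2 = CRITICAL). The thresholds are our own examples, not Nagios defaults:

```shell
#!/bin/sh
# Nagios-style plugin sketch: warn/critical on root filesystem usage.
WARN=85   # example warning threshold (%)
CRIT=95   # example critical threshold (%)
used=$(df -P / | awk 'NR==2 {gsub("%","",$5); print $5}')
if [ "$used" -ge "$CRIT" ]; then
    status="DISK CRITICAL"; code=2
elif [ "$used" -ge "$WARN" ]; then
    status="DISK WARNING"; code=1
else
    status="DISK OK"; code=0
fi
echo "$status - ${used}% used on /"
# a real plugin would now terminate with: exit "$code"
```

Nagios reads the first line of output for the status text and the exit code for the state, so a script like this dropped into the plugin directory and referenced from a check command definition is all that is needed.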



Ganglia is a monitoring tool for high performance computing systems such as private clouds, public clouds, clusters and grids. The Ganglia system contains: 1) two unique daemons, 2) a PHP-based Web front-end, and 3) other small programs. Gmond, a multi-threaded daemon, runs on each node to monitor changes in the host state, announce applicable changes, listen to the state of all Ganglia nodes via a unicast or multicast channel based on installation, and respond to requests. At regular intervals, the Ganglia Meta Daemon (gmetad) polls a collection of data sources, parses the XML, saves all metrics to round-robin databases and exports the aggregated XML. The Ganglia Web front-end is written in PHP, uses graphs generated by gmetad, and provides the collected information, like CPU utilisation, for the past day, week, month or year. Multicast mode is the default setting in a Ganglia installation and is the simplest to set up, providing redundancy. Public cloud environments such as Amazon's AWS EC2 do not support multicast, so unicast mode installation is the only set-up option available. Eucalyptus is an open source product for building AWS-compatible private clouds; its open source version does not provide built-in monitoring, but that can be achieved with Ganglia. The Eucalyptus source package includes scripts that can be used with third party tools such as Ganglia to enable Eucalyptus-specific monitoring on a pre-defined number of hosts.

OpenNMS is a network management platform which also has an open source model. It targets organisations that need scalable network management. OpenNMS supports SNMP natively, as well as common service checks. It is a new enterprise grade monitoring system. It employs a console that allows the OpenNMS daemon to communicate network status updates with the front-end engine in real time.


Zabbix is an open source network monitoring tool that can be used to automatically collect and parse data from monitored cloud resources, so that administrators can verify availability and see trends in network performance. It also provides distributed monitoring with centralised Web administration, a high level of performance and capacity, JMX monitoring, SLA and ITIL KPI metrics in reporting, as well as agent-less monitoring.


By: Mitesh Soni

The author is a Technical Lead at iGATE. He is in the Cloud Services (Research and Innovation) Group and loves to write about new technologies. Blog:


Monitoring Cloud Instances with Ganglia and sFlow

Let's Try

This article looks into some quick steps for installing, configuring and working with Ganglia and sFlow to monitor your cloud instances.

Today's IT environment comprises a vast pool of heterogeneous infrastructure resources that are either on-premise or in the cloud. Monitoring such complex environments can be a real challenge, owing to the fast-paced, ever-increasing complexity of resources. A typical cloud-based data centre can have thousands of physical servers spread across multiple geographic locations, each running a hypervisor with one or more virtual machines on it. Each of these virtual machines further connects to a virtual or physical switch, storage arrays, etc. Real-time performance monitoring and visibility become quite essential in such diverse and complex environments. This article looks into the combined use of Ganglia and sFlow as a monitoring solution for a dynamic cloud environment.

Introducing Ganglia

Ganglia is an open source monitoring tool that was initially designed to monitor high-performance computing systems such as grids and clusters. It is highly scalable by design and allows IT administrators to get a holistic view of the IT environment's performance. Ganglia leverages multicast-based protocols to listen to and advertise the state of the machines within the cluster it is monitoring. It uses XML for data representation, the XDR (External Data Representation) format for transporting the metric data, and an open source storage and visualisation tool called RRDtool. A combination of these makes Ganglia a truly concurrent and robust monitoring tool. The Ganglia monitoring system collects and processes metric data using two daemons or services, namely gmond and gmetad.

Gmond (Ganglia Monitoring Daemon)

Gmond is a simple daemon that runs on every host that has to be monitored within a cluster. It is designed to have very little overhead on the host it is monitoring. It is very easy to install and configure, supporting both Linux as well as Windows operating systems (as per its latest release, v3.5.7). Gmond primarily monitors and announces the state change of the host to the gmetad daemon using XML over unicast or multicast channels.

Gmetad (Ganglia Metadata Daemon)

This daemon is primarily responsible for polling gmond daemons across a specific cluster, gathering the XML data, parsing it and saving the data in the round-robin database (RRD). This data can then be advertised to a client over a TCP socket. The gmetad daemon is designed to collect metric data from multiple gmond as well as other gmetad daemons. This type of scenario is best suited for cloud environments in which there are multiple clusters of servers spread across geographical regions. Each cluster can have at least one gmetad daemon that polls the cluster state to a central gmetad daemon, which is responsible for data aggregation and presentation. Ganglia also comes with a unique PHP front-end that enables IT admins to get complete diagnostic and real-time information on the state of their clusters. Since Ganglia stores metric data over a period of time, IT admins can also view historic data of the clusters and hosts, ranging over the past hour, day, week, month or even year.

Figure 1: Ganglia architecture

Figure 2: sFlow-Ganglia architecture
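To make the gmond/gmetad exchange above concrete, here is a small sketch of the kind of XML cluster state that gmond publishes (the two-host sample below is fabricated for illustration) and how a poller might count the hosts in it:

```shell
# Fabricated two-host sample of the XML that gmond serves to pollers.
xml='<GANGLIA_XML><CLUSTER NAME="demo">
<HOST NAME="node1" IP="10.0.0.11"></HOST>
<HOST NAME="node2" IP="10.0.0.12"></HOST>
</CLUSTER></GANGLIA_XML>'
# Count the hosts reported in the cluster state
printf '%s\n' "$xml" | grep -c '<HOST '
```

On a live node, a similar dump can be pulled with any TCP client from gmond's default TCP port, e.g. `nc <host> 8649`.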

Introducing sFlow

While monitoring physical servers is a standard task even on a large scale, monitoring virtual infrastructure is much more challenging. A cloud's virtual compute infrastructure can scale up and down dynamically depending on various factors such as CPU load, disk reads/writes, network I/O, etc. With such unpredictability, it becomes really difficult for a monitoring tool to gauge whether a virtual machine is actually experiencing problems or not. sFlow is an industry-standard network monitoring protocol that offers end-to-end monitoring for numerous devices and applications. It provides a standard set of metrics that can be collected from a wide variety of operating systems, virtualisation platforms, and physical devices such as switches, routers, etc. These standardised metrics are fully compatible with those offered by Ganglia, thus reducing complexity in configuration and management. Similar to Ganglia's gmond daemon, sFlow provides a metric data collector in the form of the Host sFlow agent.

Host sFlow agent

This is an open source implementation of the sFlow standard protocol, and is used for physical and virtual server monitoring. Host sFlow agents are also available for monitoring hypervisors, including Xen, KVM/libvirt, Hyper-V and VMware ESX, and are supported on a variety of operating systems as well, such as Windows, Linux, Solaris, etc. The Host sFlow agent provides scalable, multi-vendor, multi-OS performance monitoring with minimal impact on the systems being monitored.

Thus, IT admins have the ability to monitor and gather metric data not only from the physical servers (using Ganglia), but also from the virtual machines and the applications running on them, giving a complete picture of how the cloud's compute infrastructure is performing.

Setting up Ganglia and sFlow in the cloud

In this guide, we are going to set up a Ganglia and sFlow monitoring system that will be able to dynamically monitor and report the performance of cloud instances in real time. These steps were performed on Amazon Web Services EC2 compute servers, but can be replicated identically on any major cloud services provider, including Eucalyptus, OpenStack, CloudStack, etc. Figure 3 gives the set-up diagram. In this scenario, we are using two instances for demo purposes. One instance will contain the Ganglia gmetad and gmond daemons, and the other instance will contain only the Host sFlow agent. The Host sFlow agent will continuously monitor the instance and send back the collected metrics to Ganglia's gmond daemon, which, in turn, will feed the metrics to gmetad for processing and visualisation. In a full-fledged production environment on the cloud, there can be multiple such Host sFlow agents monitoring the CPU, network and disk workloads and reporting these back to a single or even multiple instances of Ganglia. The design can vary as per your requirement.

Note: In this guide, I have compiled Ganglia from source, as the sFlow integration capabilities are not yet supported in the Ganglia RPM binaries.

Installing and configuring Ganglia

The first thing to do is to install some pre-requisite RPMs:

# yum install gcc gcc-c++ autoconf automake expat-devel libconfuse-devel rrdtool rrdtool-devel apr-devel libconfuse

Next, download and compile the latest version of PCRE (Perl Compatible Regular Expressions):

# wget pcre-8.32.tar.gz
# tar zxvf pcre-8.32.tar.gz
# cd pcre-8.32
# ./configure
# make
# make install

Once these pre-requisites are met, you are ready to download and compile Ganglia.

# wget download?source=files
# tar zxvf ganglia-3.5.0.tar.gz
# cd ganglia-3.5.0
# ./configure --sysconfdir=/etc/ganglia/ --sbindir=/usr/sbin/ --with-gmetad --enable-static-build
# make
# make install

Once compiled correctly, generate the gmond config file, as follows:

# gmond --default_config > /etc/ganglia/gmond.conf

Open the gmond.conf configuration file in a file editor of your choice:

# vi /etc/ganglia/gmond.conf

Provide the name of the cluster that you want to monitor. This will help you identify instances running under a particular cluster, e.g., production, staging, testing, etc.

cluster {
  name = "unspecified"
  owner = "unspecified"
  latlong = "unspecified"
  url = "unspecified"
}

Copy the init script and you are now ready to start the gmond daemon.

# cd ganglia-3.5.0/gmond
# cp gmond.init /etc/init.d/gmond
# /etc/init.d/gmond start
# chkconfig --add gmond
# chkconfig gmond on

You also need to create a directory for RRD and apply a few permissions to it to make it Ganglia-ready.

# mkdir -p /var/lib/ganglia/rrds/
# chown nobody:nobody /var/lib/ganglia/rrds/

Now copy the gmetad init script and open it in an editor of your choice.

# cd ganglia-3.5.0/gmetad
# cp gmetad.init /etc/init.d/gmetad
# vi /etc/init.d/gmetad

Comment the line starting with daemon and add the following line next to it, as shown below:

# daemon $GMETAD
($GMETAD -c /etc/ganglia/gmetad.conf -d 1 > /dev/null 2>&1 ) &

You are now ready to start the gmetad daemon:

# /etc/init.d/gmetad start
# chkconfig --add gmetad
# chkconfig gmetad on

To install the Ganglia Web interface, install the pre-requisites and then download the Web interface, untar it, move it to the document root of the Web server and, finally, compile it.

# yum install httpd php
# wget
# tar zxvf ganglia-web-3.5.7.tar.gz
# mv ganglia-web-3.5.7 /var/www/html/ganglia
# cd /var/www/html/ganglia
# make install

Figure 3: Demo setup

With this, you should have configured your first instance with Ganglia. To check whether all configurations were done correctly, simply launch a Web browser and type in the following:



Figure 4: Ganglia Web UI

You should see the Ganglia Web interface displaying the metrics of the Ganglia instance itself.

Installing the Host sFlow agent

With your primary Ganglia monitoring system set up, all you need to do now is install and configure the Host sFlow agent on the remaining cloud instances that you wish to monitor. Depending on the device you want to monitor, you can download the appropriate agent from projects/host-sflow/files/REL-1_22/ Since the instance we are using here is a simple Linux instance, let's use the standard Host sFlow agent RPM download. Once downloaded, run the RPM as shown:

# wget
# rpm -ivh hsflowd-1.22-1.x86_64.rpm

With the RPM installed, all you need to do is edit the sFlow configuration file and point the agent to the Ganglia gmond instance:

# vi /etc/hsflowd.conf

# Add the following code in your hsflowd.conf file
sflow {
  DNSSD = off
  polling = 20
  sampling = 512
  collector {
    ip = <your_Ganglia_server_IP_Address>
    udpport = 6343
  }
}

After editing the configuration file, you will need to restart the Host sFlow agent:

# service hsflowd start

After the sFlow agent is started, uncomment the following entry in the gmond configuration file (/etc/gmond.conf) on your Ganglia monitoring system instance:

# vi /etc/gmond.conf

/* sFlow channel */
udp_recv_channel {
  port = 6343
}
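A note on the `sampling = 512` setting above: sFlow is a sampling protocol, so the agent exports roughly one packet sample per 512 packets rather than inspecting every packet. A rough back-of-the-envelope sketch (the packet count is an arbitrary example):

```shell
# With sampling = 512, about 1-in-512 packets is exported; for a
# million packets that is roughly this many samples:
packets=1000000
rate=512
echo $((packets / rate))
```

Lower sampling rates give finer-grained traffic data at the cost of more collector load; the polling interval controls the counter metrics separately.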

The Ganglia charts will now start displaying the performance metrics collected using the sFlow agent. Enabling sFlow monitoring on each instance in such environments can thus provide a simple, efficient and highly scalable solution for monitoring performance and workloads in clouds.

By: Yohan Wadia

The author is a senior software engineer and is a part of the Cloud Services team (Research and Innovation) at iGATE. An avid blogger and technologist, he loves exploring all emerging technologies and trends in the IT industry. Blog:


MRTG (Multi Router Traffic Grapher), a tool for graphing numerical data, is free and has been written in Perl by Tobi Oetiker. Let's first check out how much fun installing MRTG can be. Your Linux distribution will probably include an MRTG package, so you will not have any problems installing it. At the time of writing this article, the latest MRTG stable release is version 2.17.4.

Graphing Network Performance with MRTG

This article guides readers on how to use MRTG in order to display the ADSL traffic of a Cisco 877W router with a little help from SNMP (Simple Network Management Protocol).
Configuring SNMP on a Cisco 877W ADSL router

The first thing to do on the Cisco 877W ADSL router is to turn on SNMP. Then create an SNMP community string that will help you get the required information. The following commands should be executed on the Cisco router using IOS:

cisco877w#enable
cisco877w#show snmp
%SNMP agent not enabled
cisco877w#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
cisco877w(config)#snmp-server community OpenSourceFY RO
cisco877w(config)#exit
cisco877w#write memory
Building configuration...
[OK]
cisco877w#

The RO community string (OpenSourceFY), which is all that is required for this article, permits Get requests only, whereas an RW community string allows both Get and Set requests. Allowing Set requests without any reason and without using Cisco Access Lists (ACLs) is a security threat. The following command, which produces plenty of output, will examine whether the SNMP router set-up is working as expected:

$ snmpwalk -Os -c OpenSourceFY -v 1 cisco
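The snmpwalk output consists of OBJECT = TYPE: VALUE lines. As a small illustration (the sample line below is fabricated, but follows the shape snmpwalk prints for a counter), extracting just the value from such a line can be done as follows:

```shell
# A fabricated line of the shape snmpwalk prints for an octet counter
line='ifInOctets.14 = Counter32: 123456'
# Split on ": " and keep the value part
printf '%s\n' "$line" | awk -F': ' '{print $2}'
```

This sort of one-liner is handy when scripting ad hoc checks against the router before handing the OIDs over to MRTG.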

The cfgmaker utility

The cfgmaker MRTG utility helps create configuration files. It may add unnecessary data that you should clean up later, but its output is a great starting point. I ran the cfgmaker utility as follows:
$ cfgmaker --global 'WorkDir: /Users/mtsouk/Sites/mrtg' OpenSourceFY@cisco > cisco-MRTG.cfg

The executed command creates the cisco-MRTG.cfg file inside the /Users/mtsouk/Sites/mrtg directory and queries a machine named cisco (I have a cisco entry in my /etc/hosts file) using the OpenSourceFY community string. After creating the configuration file, a Web page that will allow access to MRTG's output should be created. This task can be done using the indexmaker utility, which also comes with MRTG, as follows:



$ indexmaker cisco-MRTG.cfg > index.html


Now you can point your Web browser to the right URL, depending on where you put MRTG's index.html file, to view MRTG's information. No image file has been created yet, so the output will look unpleasant.

Setting up MRTG for the Cisco ADSL router

As the cisco-MRTG.cfg file created by the cfgmaker utility contains unwanted information, I had to clean it up. My final MRTG configuration file is given below:
# /opt/local/bin/cfgmaker --global "WorkDir: /Users/mtsouk/Sites/mrtg" OpenSourceFY@cisco

### Global Defaults
EnableIPv6: no
WorkDir: /Users/mtsouk/Sites/mrtg

### Interface 5 >> Descr: 'Dot11Radio0' | Name: 'Do0' | Ip: '' | Eth: '00-1d-a2-e7-3f-b0' ###
Target[cisco_5]: 5:OpenSourceFY@cisco:
SetEnv[cisco_5]: MRTG_INT_IP="" MRTG_INT_DESCR="Dot11Radio0"
MaxBytes[cisco_5]: 6750000
Title[cisco_5]: Traffic Analysis for WiFi Connection
PageTop[cisco_5]: <h1>Traffic Analysis for WiFi Connection -- cisco877w.mtsouk.local</h1>
 <div id="sysdetails">
 <table>
 <tr><td>System:</td><td>cisco877w.mtsouk.local</td></tr>
 <tr><td>Maintainer:</td><td></td></tr>
 <tr><td>Description:</td><td>Dot11Radio0</td></tr>
 <tr><td>ifType:</td><td>Radio Spread Spectrum (802.11) (71)</td></tr>
 <tr><td>ifName:</td><td>Do0</td></tr>
 <tr><td>Max Speed:</td><td>6750.0 kBytes/s</td></tr>
 <tr><td>Ip:</td><td> ()</td></tr>
 </table>
 </div>

### Interface 15 >> Descr: 'Dialer1' | Name: 'Dialer 1' | Ip: '' | Eth: '' ###
Target[Cisco-linespeed]: .1.3.6.1.2.1.2.2.1.10.14&.1.3.6.1.2.1.2.2.1.16.14:OpenSourceFY@cisco:
SetEnv[Cisco-linespeed]: MRTG_INT_IP="" MRTG_INT_DESCR="Dialer1"
MaxBytes[Cisco-linespeed]: 3145728
Title[Cisco-linespeed]: Traffic Analysis for ADSL Internet connection -- Cisco 877W
Legend1[Cisco-linespeed]: Average
Legend2[Cisco-linespeed]:
Legend3[Cisco-linespeed]: Maximum
Legend4[Cisco-linespeed]:
LegendI[Cisco-linespeed]: TX:
LegendO[Cisco-linespeed]: RX:
PageTop[Cisco-linespeed]: <h1>Traffic Analysis for ADSL Internet connection</h1>
 <div id="sysdetails">
 <table>
 <tr><td>System:</td><td>Cisco 877W</td></tr>
 <tr><td>Maintainer:</td><td>Mihalis Tsoukalos</td></tr>
 <tr><td>Description:</td><td>Cisco-linespeed</td></tr>
 <tr><td>ifType:</td><td>ADSL connection</td></tr>
 <tr><td>ifName:</td><td>Dialer1</td></tr>
 <tr><td>Max Speed:</td><td>250.00 kBytes/s</td></tr>
 </table>
 </div>
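A quick note on the MaxBytes values in the configuration above: MRTG expects them in bytes per second. The WiFi interface's 6750000, for instance, corresponds to a 54 Mbit/s 802.11g link divided by the 8 bits in a byte:

```shell
# 54 Mbit/s link speed expressed in bytes per second, as MRTG expects
echo $((54000000 / 8))
```

Getting MaxBytes right matters, because MRTG uses it to reject bogus counter readings and to scale the graphs.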

The aforementioned configuration file monitors my ADSL as well as my Wi-Fi traffic. The Wi-Fi connection was automatically found by the cfgmaker utility, but setting up the ADSL was more complicated. The Cisco SNMP OIDs (Object Identifiers) that interest me are the following: .1.3.6.1.2.1.2.2.1.10.14 returns the incoming traffic for the ADSL connection, and .1.3.6.1.2.1.2.2.1.16.14 returns the outgoing traffic for my ADSL connection.

Finding those two number sequences is a little difficult, so I will explain how I found them: I Googled "IF MIB download" and found a page with the IF-MIB definitions. I knew that I was looking for In and Out Octets and, by looking at the Web page, I found that I was interested in the sequences .1.3.6.1.2.1.2.2.1.10 (ifInOctets) and .1.3.6.1.2.1.2.2.1.16 (ifOutOctets), respectively. The last thing to do was to add the desired Cisco interface number at the end of each sequence. The available interfaces can be found by executing the following Cisco IOS command:
cisco877w#show snmp mib ifmib ifindex
ATM0: Ifindex = 6
ATM0-adsl: Ifindex = 12
ATM0-atm layer: Ifindex = 8
ATM0.0-atm subif: Ifindex = 9
FastEthernet0: Ifindex = 1
Null0: Ifindex = 7
Dialer1: Ifindex = 14
FastEthernet1: Ifindex = 2
Virtual-Access1: Ifindex = 15
Vlan1: Ifindex = 13
FastEthernet2: Ifindex = 3
FastEthernet3: Ifindex = 4
Dot11Radio0: Ifindex = 5
ATM0-aal5 layer: Ifindex = 10
ATM0.0-aal5 layer: Ifindex = 11
cisco877w#

In my case, the required interface index number was 14 (Dialer1). I made MRTG track those two values using the following line:

Target[Cisco-linespeed]: .1.3.6.1.2.1.2.2.1.10.14&.1.3.6.1.2.1.2.2.1.16.14:OpenSourceFY@cisco:
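These two OIDs are cumulative octet counters, not rates; MRTG samples them on each run and divides the delta by the polling interval to obtain an average throughput. A tiny sketch with fabricated sample values:

```shell
# Two fabricated consecutive samples of ifInOctets and the interval
c1=1000000
c2=1150000
interval=300          # seconds between MRTG runs (five minutes)
echo $(( (c2 - c1) / interval ))   # average incoming bytes/s
```

This is also why 32-bit counters on fast links can wrap between samples; a shorter interval, or 64-bit counters where available, avoids that.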

Running MRTG

Figure 1: The output from the MRTG index page

There are two ways of running MRTG: manually or as a cron job. The first way is useful for testing and debugging purposes. After going to the /Users/mtsouk/Sites/mrtg folder, I ran the following command:
$ mrtg cisco-MRTG.cfg

The first time you run it, you will see many error and warning messages, due to the fact that all the MRTG output and data files are missing and MRTG has to build them. The next task is to set up MRTG to run as a cron job by adding the following crontab entry:

*/5 * * * * /opt/local/bin/mrtg /Users/mtsouk/Sites/mrtg/cisco-MRTG.cfg

This cron command makes MRTG run every five minutes.
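As an alternative to cron, MRTG can also keep itself running and poll on its own; this relies on MRTG's documented RunAsDaemon and Interval global keywords (the file path below is illustrative, not the article's):

```shell
# Sketch: global keywords that make MRTG poll every five minutes
# by itself, written to an illustrative config file.
cat > /tmp/mrtg-daemon.conf <<'EOF'
RunAsDaemon: Yes
Interval: 5
EOF
grep -c ':' /tmp/mrtg-daemon.conf
```

With these globals in the real configuration file, you start mrtg once (e.g., from an init script) instead of scheduling it with cron.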

MRTG output

Figure 1 shows the output of the MRTG index page as created by the indexmaker utility with the initial cisco-MRTG.cfg file created by the cfgmaker utility. Figure 2 shows the detailed output that is displayed when you click on one of the images in Figure 1. There is also a yearly graph that is not shown.

By: Mihalis Tsoukalos

Mihalis Tsoukalos enjoys photography, UNIX administration, programming iOS devices and creating websites. You can reach him at or @mactsouk

Figure 2: Detailed MRTG output


Open Gurus

Grub 2 Demystified A Complete Perspective


Grub, also known as the GRand Unified Bootloader, has been the main boot loader for many Linux distributions. Grub was initially released as a part of the GNU HURD project and was later merged into the public repository of the GNU project. It is presently available in two versions: the legacy Grub and the new version marketed as Grub 2, which has a number of interesting features and options.

Grub2 was derived from a project called PUPA. The project's aim was to enhance the GNU boot loader, make the code more secure and provide a robust platform for the end user. The magnitude of changes called for an immediate rewrite of the code from scratch. Grub2, with its completely rewritten code, boasts a highly modular and secure structure offering a variety of expandable designs. While newly written code and a completely new design offer many advantages, they also have their own drawbacks. For starters, Grub2 tries to be simple and looks similar to the legacy Grub, providing users with a familiar feeling. The new model brings into play a completely new set-up wizard, which is too perplexing for newbies, while at the same time creating new opportunities for developers to get the most out of the modular design. Here are some of the key features that make Grub2 more sophisticated and easier for developers to maintain.

Scripting support: Grub2 offers scripting support. Developers and users can create custom scripts to carry out specialised functions. Grub2 even allows you to make use of conditional statements.

Dynamic module loading: Boasting a modular design, Grub2 offers dynamic module loading, which means that each and every module can be loaded when needed, during selection time or after it.

Custom menus and themes: Eye candy has never been a Grub feature, but with Grub2, developers aim to provide ample leeway for designers to create gorgeous themes.

Support for UUID: Grub2 has native support for UUID (Universally Unique Identifier), providing a more concrete solution to identifying partitions.

A centralised system for rescue and setting up: Grub2 offers even more robust rescue measures in case of problems.

The above-mentioned features provide only a glimpse of what Grub2 is really capable of, covering only some of the newly released features. Such robust features and a completely new approach have unfortunately brought along many drawbacks as well. The incorporation of a new set-up wizard is one. Users conversant with the older configuration wizard will have difficulties getting used to the new one.
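The scripting support mentioned above can be sketched as follows; Grub2 menu files use a small shell-like language that allows conditionals (the fragment below is an illustrative example, not from the article, and the EFI shell path is a placeholder):

```shell
# Write a sample Grub2 script fragment with a conditional, of the
# kind that could live in /etc/grub.d/40_custom (illustrative only).
cat > /tmp/40_custom_demo <<'EOF'
if [ "${grub_platform}" = "efi" ]; then
    menuentry "EFI shell" {
        chainloader /EFI/tools/shell.efi
    }
fi
EOF
grep -c menuentry /tmp/40_custom_demo
```

Here the menu entry is only generated when Grub is running on an EFI platform, which is the sort of conditional logic the legacy Grub could not express.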


Grub2: An in-depth look

Figure 1: Grub2 default welcome screen

Grub2 offers new configuration files and is not at all similar to the older version, even in terms of how it looks. There are also major changes in the functionality of the new Grub. It displays the menu when the Shift key is held down whereas, previously, the Esc key needed to be used if only a single Linux installation was found. The configuration panel has been refurbished in the new avatar. The older menu.lst file in the /boot/grub folder no longer exists and has been replaced by a new grub.cfg file. The new config files are dynamic and depend on many modules for data. As a result, grub.cfg gets rewritten after a system update; once the file gets updated, all the changes will be lost. To make static changes, Grub2 has introduced a new user-defined file called 40_custom, which is located in the /etc/grub.d directory. This file is independent of the modules, and the changes made will remain untouched irrespective of changes made to the grub.cfg file. The most noticeable change in Grub2 is perhaps a completely new nomenclature for the partitions. Grub2 now denotes the first partition as 1, unlike Grub, where the first partition was denoted by 0. However, the first hard disk device is still denoted by hd0, keeping the older device convention intact.

Directory layout

/boot/grub: Contains most of the configuration files and modules that are needed for the system to boot. The centralised config file grub.cfg resides in this folder.

/etc/grub.d: Contains most of the scripts and user-defined files from which the grub.cfg file is created. When the requisite commands are run, files and scripts within this folder are read to amend changes in the main configuration file, i.e., grub.cfg.

/etc/defaults/grub: Contains the config files for menu settings that are further implemented in the grub.cfg file.

Note: It is not advisable to edit the grub.cfg file. There are multiple reasons for this, the most important one being that if anything goes wrong, the chances of a boot failure are very high. Also, data in the grub.cfg file is dynamic and dependent on other config files. So it is advisable to edit the config files in the other directories upon which the grub.cfg file relies.

General configuration

Accompanying the new config files, Grub2 now offers a completely new command line interface. With the new commands used to manage Grub2, things can be cumbersome at times. To make things easier for new users, I will try to cover the general Grub2 commands that will come in handy.

Updating Grub: The update-grub command lets you update Grub by rewriting the grub.cfg file, reading various system modules and configuration files. The command will gather and enlist all the scripts and config files in the Grub2 directories.

Switching menu entries: Since it is not recommended to edit the grub.cfg file, let's edit the requisite source file in order to make changes. To make a boot entry the default, edit /etc/default/grub. The Grub file under the /etc/default directory has many useful customisation parameters for editing menus. To make an entry the default, change the numerical value of the entry GRUB_DEFAULT=0 to whichever entry you wish to boot into automatically. Similarly, you can change the time-out seconds in the same file by editing the GRUB_TIMEOUT=10 value.

Recovering Grub 2: The most important aspect of configuring Grub is recovery. Most of the time, during the installation of some other operating system, Grub could get lost or replaced with some other boot loader. In that case, you have to re-install Grub. In order to do that, a bootable Linux medium with Grub2 as the main boot loader is required. Once booted into the live environment, open a terminal. As the root user, issue the following commands:

mount /dev/sdaXY /mnt
grub-install --root-directory=/mnt /dev/hdZ

where /dev/sdaXY is the partition in which /boot of the system is installed, and /dev/hdZ is the hard disk on which the /(root) partition resides. Reinstalling Grub will restore and enlist all the entries and settings that you have previously applied. Once Grub is restored, the update-grub command will add newer operating system entries from the installed system.

General customisation options: Grub2 offers many customisation options. Most of the set-up files needed to tweak the default look and feel are available under the /etc/grub.d directory.

Working with Grub2

Working with Grub2 can be daunting, given the number of changes it carries. So you first need to get acquainted with the newer terms and files needed for editing.

Note: The main configuration file grub.cfg is automatically generated; thus the changes made to it are temporary and tend to fade away with a Grub update. During Grub2 configuration, the most important thing is to avoid making changes directly to the grub.cfg file.

Adding custom entries: Even though the detection mechanism is robust and efficient, there may be a scenario when Grub fails to add an entry, or you want to add your own entry. For adding a custom entry to Grub2, you have to modify the file /etc/grub.d/40_custom. The format in which the entry should be written is as follows:

menuentry name: This is the name that is shown in the Grub2 boot loader for selection.

set root=(hdX,Y): Defines the partition from where the boot files are loaded and booting progresses, where X denotes the disk number and Y the partition number.

linux /vmlinuz-xyz: Defines the path to the Linux image; add this if you are adding a Linux entry.

initrd /initramfs.img: Defines the ramfs for the system, which is only needed if the system depends on initrd for booting. It is not required for Windows systems.

Here is what our /etc/grub.d/40_custom looks like after making a user-defined Grub entry:

# (1) OSFY_Linux
menuentry "Open Source For You" {
    set root=(hd0,2)
    linux /vmlinuz-linux root=/dev/sda2 ro nomodeset
    initrd /initramfs-linux26.img
}

Figure 2: Grub2 bootloader editor

Figure 3: Grub2 KCM

To append these changes to the main configuration file, run the Grub update commands. Grub2 can be updated by using two commands. As the root user, issue either of the following:

update-grub
grub-mkconfig -o /boot/grub/grub.cfg

Booting an ISO with Grub2: With the Grub2 custom menu and entry wizard, booting an ISO directly from Grub2 has become much easier. There is no need to burn an optical disk when you have a nifty boot loader at your disposal. Booting an ISO is very much like adding an entry; the only difference is that, rather than pointing to a kernel, you point to the ISO for booting. To boot an Ubuntu ISO, append the following statement in the /etc/grub.d/40_custom file:

menuentry "ubuntu-11.04-desktop-amd64.iso" {
    set isofile="/media/hd1/ubuntu-11.04-desktop-amd64.iso"
    loopback loop (hd0,1)$isofile
    linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=$isofile quiet noeject noprompt splash
    initrd (loop)/casper/initrd.lz
}

Change the path of the ISO, and replace the (hd0,1), where 0 is the disk number and 1 is the partition number.

Securing Grub2: Security was one of the key features underlying Grub2 development, and Grub2 boasts a highly secure system. To further aid security, you can define a password and allow users to boot only into a specified OS. Grub2 is still in the active development stage, so settings can vary depending on how the developers want to streamline things. Grub2 offers multiple solutions for displaying Grub entries; a user can have both protected and unprotected entries in the Grub2 boot loader menu. The Grub2 security system goes a step further by allowing you to select users and assign them different booting privileges. From a security point of view, Grub2 offers multiple users and Grub configuration files, making it harder for users to tinker with the settings on their own.

To get started, let us initially set up users and passwords to configure security in Grub2. Open /etc/grub.d/00_header and add the following commands to set up security based on authentication:

cat << EOF
set superusers="osfy"
password osfy l1nux
EOF

The aforementioned statement creates a super-user with the name osfy and assigns the password l1nux to it.
Figure 4: A customised Grub2 splash

Note: It is not necessary to have the same user name present in the current Linux system. You can use any arbitrary name to secure the Grub system, irrespective of its availability as a system user.

To set up multiple users, you can simply assign multiple users and passwords with the password field:

cat << EOF
set superusers="osfy"
password osfy l1nux
password linux ubUntU
EOF

# (1) OSFY_Linux menuentry Open Source For You users osfy set root=(hd0,2) linux /vmlinuz-linux root=/dev/sda2 ro nomodeset initrd /initramfs-linux26.img

The password mentioned above is in plain text and can be read by anyone with administrator privileges. For further security enhancements and fortifications, Grub allows encryption of the password using an inbuilt tool, namely, grub-mkpasswd_pbkdf2. Running the command will bring up a prompt asking you to enter the password. Once entered, it will generate an encrypted string. To use the generated string, replace it with the password in front of the user of choice in the /etc/grub.d/00_header:
cat << EOF set superusers=osfy password osfy grub.pbkdf2.sha512.10000.9FF12B1401811430D664B6 2473FE2CE19FA6B04563FE908ED7CF0462 E60CB06BD94320662BBC4D0 16609A4F613B2C45BEAE2C0BBB23944D4CBA205C3EA78E093.9047CDC6FE8 52574B9AEFBACD1275700557066ADA7EE307C98765E04E6997056D748BF21 C369617B94BB2DD3C096FA541A5F4C19BDDDF4D731D3785A16409652 password linux ubUntU EOF

Simply append users <user_name> in front of menuentry and that segment will be secured. Make sure you enter a valid username; else it can result in boot failure. Customising Grub2: The next step in managing Grub2 is to customise it. Unfortunately, Grub2 is not as customisable as the legacy Grub. Intuitive and fancy configuration options like Gfx Grub are missing at the time of writing, but might reappear in the near future. Grub2 offers some nifty cosmetic changes out-of-the-box, such as changing the Grub menu background and changing its colours. To change the Grub background, copy a jpeg, png or tga format image in the /boot/grub folder. Once you have saved the file, open the configuration file /etc/defaults/grub with a text editor and add the following statement:

Replace test.png with the filename that you have saved. Append the changes to the main configuration file by updating Grub. Use one of the following commands as the super user:
update-grub grub-mkconfig o /boot/grub/grub.cfg

To change the menu colours, add the following command:


The string generated is too long and looks messy. Perhaps developers should fix the shortcomings of generating such long strings. Once you have got the user setting fixed, define which user should be able to boot or which entries should be secured. To configure Grub2 further, you will have to edit /etc/

To check the supported colours and gradients, please visit the GNU Grub Theme Site: software/grub/manual/html_node/Theme-file-format.html#Theme-file-format
May 2013 | 77

Open Gurus

Third-party applications

Let's Try

A generalised modular architecture allows developers to release applications that make it easier for end users to edit and play around with Grub2. A number of such applications are currently available; here are three of them.

Grub2 boot loader editor: This is a KDE system add-on and graphical front-end for editing and customising Grub2. It lets you manage Grub2 entries, the timeout, default entries and kernel parameters on-the-fly. It also lets you create and manage Grub2 splashes with easy-to-follow steps. Best of all, with just a single click you can even restore Grub2, which makes it one handy tool for managing the boot loader.

Grub2 KCM: Another KDE system settings add-on, Grub2 KCM offers a stripped-down set-up wizard. Though not as fancy as the boot loader editor, Grub2 KCM allows you to manage and update the basic functionality.

Start-Up Manager: Touted as the most widely used Grub manager, this set-up tool is a full-fledged GUI application, unlike the two above, which act as add-ons to KDE system settings. Start-Up Manager offers a large number of general configuration options and can manage more than just Grub2; it supports the legacy Grub, Grub2, Splashy and Usplash. Unfortunately, development of this software has come to a halt.

Grub2 is built on new and robust technology, and provides users with a more sophisticated environment to work with. The modular approach and a more streamlined customisation system are added advantages. Though still under heavy development, users can get the sources and use it. With the halt in the legacy Grub's development, it is pretty clear that the future of the GNU boot loader is Grub2. Even though it is ready for mainstream use, many distro vendors have stuck to the legacy Grub; right now, apart from Ubuntu and Fedora, Grub2 does not have many takers. Still, Grub2 is a positive step forward. With efficiency comes complexity, and that seems to be the case with the Grub2 set-up. Grub2 will sooner or later become the mainstream boot loader, so if you want to try out a top-of-the-line and secure booting environment, do give it a try.

References
[1] Grub Home Page:
[2] GNU Grub Theme Site: manual/html_node/Theme-file-format.html#Theme-file-format

By: Shashwat Pant

The author is a FOSS enthusiast who likes to tweak his hardware for optimum performance. You can reach him on Twitter @shashpant or visit his website at



Open Gurus

This article discusses Slackware, a free and open source Linux-based operating system.

Simple, Straightforward and Stable

Slackware GNU/Linux was created by Patrick Volkerding in 1993. It was originally a derivative of the Softlanding Linux System (SLS), one of the most popular of the original GNU/Linux distributions and the first to offer a comprehensive software collection that comprised more than just the kernel and basic utilities. In time, Slackware evolved into a separate system with clear goals and a development model, while its precursor, the SLS, lost the crowd's interest and soon vanished. Over the years, Slackware has had many versions and changes, but it still remains the most loyal to its UNIX-like system roots among all the GNU/Linux distros. The will to preserve and support a tech utility or way of

doing things well known to the user base is probably the best thing about Slackware. It makes you learn less unnecessary new stuff and keeps the user in control. This simplicity is not matched by any other system in existence. Being a fairly new UNIX connoisseur, I was introduced to Slackware in its 14th avatar in the fall of 2012. I have missed the countless events and steps that Slackware has taken, and the obstacles it has overcome over the years, to become the oldest still actively developed GNU/Linux distribution in existence. I missed the switch from XFree86 to X.Org, and the project's lead developer Patrick Volkerding's mysterious illness back in the early 2000s. Emotions apart, Slackware has a great tendency not to

adopt new, unstable and potentially useless technologies, however warmly they are received by other communities and distros in the *nix world. One example is systemd, which many in the Slackware community regard as a needless replacement for the existing System V-style init systems widely implemented in almost all distros. Slackware hasn't yet addressed this issue officially, but there has been general chatter on IRC and the mailing lists about it, and the prevailing belief among users and developers is that it will not switch away from its beautifully working runlevel system for a questionable upgrade.

Slackware is not about being compatible with the rest of the world, or having the most user-friendly workflow or ease of use. For that matter, Slackware is wrongly portrayed as an 'advanced users only' distro with a community of advanced users to match. In truth, Slackware is supported primarily by the 'grand-dads' who are caring yet cranky enough to turn you into one of the future *nixers who will succeed them as the greybeards of Slackware.

The installation media can be acquired from http:// I, however, preferred getting the DVD image for my architecture, amd64, for testing the installation on all three of my trusted machines: my ThinkPad x120e, ThinkPad x201s and MacBook Pro 2010. The installer for Slackware is based on ncurses, which gives the distro its retro look and appeals to users because of the simplicity it provides. The installer requires you to partition your disks with the cfdisk or fdisk utilities, which are certainly more geeky and not as simple as gparted or the automatic partitioning tools shipped with most beginner distros. KDE is the default desktop environment for Slackware, but Xfce and other window managers are shipped with the DVD image too; it is easy enough to skip installing anything you don't want.

You can choose the exact packages you want for installation individually, or you can choose from categories; even a network install is possible, but it is not always the most feasible thing to do. The best thing about the DVD image is that it contains all the officially packaged software that Slackware provides from its core infrastructure. Frankly, it contains all the tools I need for my workflow, not to mention the pkgtools package management toolkit, which is actually just a clever way of using make install. This specific feature makes Slackware the best fit for my personal needs. However, users who are not so comfortable with packaging and maintaining builds of software for themselves can rely on newly established services for Slackware users, such as and slapt-get, which acts a lot like Debian's apt. In


my opinion, though, the use of such tools robs you of the advantage that Slackware provides: a clean slate to build on and maintain. To paraphrase Patrick Volkerding, Slackware is designed and developed in such a way that if you want to compile or rebuild a part of it, it will not fight you in the process, and will gladly surrender to your changes. The Slackware philosophy is a lot like that of the BSD operating systems of today: the automation of maintenance tasks to make the user's life easier, while also making it simpler to tweak the system. And how can I forget the love the distro has for LILO, the long-forgotten boot loader, which is the default on Slackware. It worked flawlessly on the MBR-formatted disks on my ThinkPads, but made me cry my eyes out on the GPT set-up on my MacBook Pro. What follows are excerpts from my journal regarding my experience with Slackware.

Day 1

Downloaded the Slackware 'current' distro tree to build my own image for installation, using Pat's script for building a custom ISO image. Ran the following as root:
isohybrid </path/to/the/.iso>

This command needs the Syslinux package, a modern boot loader found in almost all distros' repos and used for booting live images of distros like the *buntus, Fedora and Manjaro. I was running Debian with Syslinux installed and created the hybrid image there, so I can vouch for Debian being able to do it. After this, I got a flash drive and dd'd this hybrid image on to it, running the following command as root:
dd if=/path/to/.iso of=/dev/sdb

Note: /dev/sdb is the device node of your flash drive. Check and verify the exact node your USB flash drive is on, with the following command:
fdisk -l

After booting from the USB on my ThinkPad x120e I saw a root prompt and ran:
mkdir adityainstallsslackware && mount /dev/sdb adityainstallsslackware

After this, the installation was quite straightforward: choosing the keyboard layout, partitioning the disk (which I had to do manually with cfdisk), and then selecting the partitions on which to install swap, / and /home.

Then I had to select the packages to install from a radio-button menu. I didn't install KDE since I am anti K/Qt; I chose Xfce and no display manager. I highly recommend Xfce, which is at version 4.8 right now on Slackware, and works without a hiccup. The best part about Slackware's installer is that it is ncurses based, which makes me feel like I am running a real UNIX-like system and not a cheap clone!
Day 6


boot my Mac, and had to use rEFIt to change the partition set-up from dos MBR to GPT.

Day 2

Installed a newer version of Gambas since I had an urge to hack on something not so familiar. In my opinion, it is the best way in the world to install software, and it worked out quite nicely. The dependencies are listed on the Gambas homepage. I first yanked them up from, then I installed an older version of Python, since I needed to test some of my apps on Debian Squeeze, which still has Python 2.6 by default (and I can't expect all my users to build and install a newer version of Python on their systems).

Packaged my first piece of software in Slackware's native .tgz format, which is nothing but a clever way of using GNU make's build powers as an installation capability. Subsequently, I read the Slack Book, which has a new version in beta these days. This version is much more evolved, with newer, more useful, relevant and contemporary documentation about Slackware than the current stable version. Stable book version:; newer beta book version:

Day 3

Had to set up my HP deskjet printer. To my surprise, Slackware doesn't just come with the Hplip drivers; it comes with the whole HP printer suite with an official HP CUPS manager.

Day 7

Encountered my first dependency/philosophy problem. It turns out the version of VirtualBox that I acquired was virtualbox-qt, and since I am anti Qt/K, I had to re-download the right binaries for the GTK2 version of the VirtualBox UI.

Day 8

Day 4

I had to enable the network manager and Bluetooth service daemons from /etc/rc.d. Created an iptables set-up with Alien Bob's script generator at http://www.slackware.com/~alien/efg/. It's as easy as pie, and the script generated is adequate if you are running a desktop on which you don't SSH much; it's not quite effective for a high-traffic server.

Day 5

Had to set up the Broadcom wireless card on my MacBook Pro (BCM4322). Yanked b43-fwcutter and ran it; it pulled the firmware for my card using wget and installed it for the b43 driver in the Linux kernel. I just had to reboot and my Wi-Fi was working. In the eight days that I used Slackware as my main OS on all three machines, I must say I was amazed by its simplicity and straightforwardness; nothing comes close to it. I now have two distros that I can rely upon: Debian and Slackware. If I were to choose one (every Debian purist vein in my body is screaming as I say this), I would choose Slackware for its simplicity :)

References
[1];; www. (the official Slackware forums community)

Finally, set up Slackware on the ThinkPads, and then it was time to rip a little hair from my head: I now had to set it up on my MacBook Pro. One thing I'd like to mention is the difference between EFI and UEFI. UEFI systems have a PROM where the firmware is installed, which can't be replaced or overwritten easily; it may brick your system if the EFI support on your UEFI system is buggy. Apple EFI, which is an Apple-specific version of (U)EFI, doesn't follow the standards set by the industry. The firmware on Macintosh systems is primarily stored on the hard disk itself, which makes it almost immune to bricking. Resetting the PROM is as easy as holding down cmd + opt + P + R, powering on the machine and waiting for three subsequent start-up chimes, which will reset the firmware on your Mac to its factory state (this also works if you forget the EFI password on your pre-2011 Mac; I know this from experience, though I am not sure about post-2011 Macs since I don't have any). Anyway, I had to replace the stock LILO with LILO-EFI64 to be able to

[2]; questions/slackware-installation-40/

Special thanks:
I would like to thank Patrick Volkerding and the whole Slackware crew for answering every query I threw at them via email, and for patiently helping me resolve the countless issues I faced while breaking the packaging process because of my own foolishness.

By: Aditya Pareek

The Jaipur-based author is a UNIX hobbyist and programmer. He can be reached at or at the #crunchbang-offtopic channel. He blogs at



Secure Your SCADA Network Using Honeypot

Supervisory Control and Data Acquisition (SCADA) is an integral part of a process control network. By actually damaging critical infrastructure assets, including a nuclear plant's centrifuges, the Stuxnet virus proved the need for process control network security. Having woken up to this new threat, people are coming up with various strategies to mitigate such attacks. Various tools and techniques are being deployed to enhance the security posture of SCADA installations, among the most important being honeypots and honeynets.


Process control and automation systems are the lifelines of critical infrastructure like air traffic control systems, nuclear plants, satellite launch systems, electricity generation, water supplies, oil and gas refineries, and so on. Any disruption to these systems may result in catastrophic risks, including loss of human life. Till recently, most of the networking products in the critical infrastructure area were perceived to be in a safe environment: the protocols used for their communication were proprietary, and these networks were usually physically isolated from IT networks. With new requirements like access to real-time data, inter-communication between products from disparate vendors, connectivity with ERP systems and, of course, cost-effectiveness, standard protocols such as Ethernet and TCP/IP have been adopted to a large extent in process networks. These networks are also being connected to IT networks, with Ethernet now used as the backbone to connect various devices and run the day-to-day manufacturing processes. But along with benefits like ease of use and ease of connecting, combining IT and process control networks has added risk: the latter are now exposed to all the risks associated with the IT network.

A typical process control network (PCN) is organised in levels, starting at Level 0. Let us try to understand these levels with the example of temperature control. A temperature sensor (thermometer, Level 0) in the boiler sends the current value of the water's temperature to a controller. Depending upon the desired target temperature, the temperature controller (Level 1) switches the heater on or off. In a typical factory, there will be many such controllers connected to a centralised (supervisory) control (Level 2) to ensure synchronisation between various processes. Advanced controllers (Level 3) are used to optimise the processes; these may include historians (which maintain the history of process parameters) or optimisation controllers. Here, Level 0 signals are typically analogue in nature, while Levels 1 to 3 can use Ethernet for connectivity. The business network, which is not part of the PCN, is considered Level 4, and care is taken to allow access between the two networks only on a need basis. Supervisory Control and Data Acquisition (SCADA), at Level 2, is one of the most important parts of the PCN. It is used to centrally monitor and record various process parameters. The processes may be running at one physical location while SCADA is located somewhere entirely different; as required, WAN or LAN links are used to interconnect them.
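The Level 0/Level 1 loop described above is simple on/off control; a toy sketch of the decision logic (temperatures and dead band purely illustrative):

```python
def heater_command(current_temp, target_temp, hysteresis=1.0):
    """Toy Level-1 controller: decide the heater state from the
    Level-0 sensor reading.  A dead band (hysteresis) around the
    target stops the heater from chattering on and off."""
    if current_temp < target_temp - hysteresis:
        return "on"
    if current_temp > target_temp + hysteresis:
        return "off"
    return "hold"  # inside the dead band: keep the previous state
```

A real PLC adds scan cycles, alarm limits and interlocks, but the Level-1 decision is of this shape.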

Figure 1: A typical process control network

Figure 2: Honeypot placement

Honeypots and honeynets

Wikipedia defines a honeypot as a trap set to detect, deflect, or in some manner counteract attempts at unauthorised use of information systems. Generally, it consists of a computer, data or a network site that appears to be part of a network, but is actually isolated and monitored, and which seems to contain information or a resource of value to attackers. Thus, an attacker may attack a SCADA honeypot perceiving it to be a true SCADA system. Multiple honeypots configured to mimic various devices or operating systems form a honeynet. Depending upon the requirement, honeypots and honeynets can be deployed at any of the following locations:
- Directly accessible from the Web
- In a de-militarised zone, where access is allowed from the Internet as well as from the protected internal network
- On the internal network
Honeypots and honeynets help to ensure security in various ways:
- They divert the attacker's attention to an easy target rather than the actual system.
- They log the attacker's activities for further analysis, to gain in-depth knowledge about the attack and to develop prevention techniques.
- They provide forensic information, which is required by law enforcement agencies to establish that an attack occurred.

Characteristics of honeypots and honeynets

- They look genuine, exactly like the system they mimic; an attacker should not be able to make out that they are modified systems.
- They allow only controlled traffic towards the Internet; an attacker should not be able to use the honeypot as a stepping stone for further attacks on the Internet.
- They may contain dummy information; for example, a SCADA honeypot may contain a Web page resembling the genuine SCADA system. This will attract attackers and keep them engaged, ultimately resulting in more time and attack techniques being spent on this system.

Honeyd: An open source honeypot

As defined on its home page, Honeyd is a small daemon that creates virtual hosts on a network. These hosts can be configured to run arbitrary services, and their personality can be adapted so that they appear to be running certain operating systems. Honeyd enables a single host to claim multiple addresses (tested up to 65536) on a LAN for network simulation. Honeyd improves cyber security by providing mechanisms for threat detection and assessment. It also deters adversaries by hiding real systems in the middle of virtual systems. The Honeyd configuration file defines how the configured honeypot will respond to various types of requests, such as ICMP ping, requests on UDP ports, TCP SYN, and so on, thus, in a way, defining the status of various ports and services. This reply is interpreted by the scanning tool as a system running a corresponding service.

The basics of nmap port scanning

Let us understand the process of port scanning as used by the network scanning tool, nmap. A typical SYN scan sends a SYN packet to the destination IP address on the port number to be scanned. There are three possible replies from the destination IP, as shown in Table 1.

Table 1: nmap port status
No | Response    | Port status | Explanation
1  | SYN ACK     | Open        | Service running on the port
2  | RST         | Closed      | No service running on the port
3  | No response | Filtered    | Could not determine the port status

The simplest way to install Honeyd under Ubuntu 12.04 is to use the following command:

sudo apt-get install honeyd

Honeyd is installed in /usr/share/honeyd. Once installed, it can be configured to mimic various operating systems, which appear to run various services.
May 2013 | 83


Let's Try

The other commands are self-explanatory. To start the honeypot configured in winxp.conf under daemon mode, use the following command:
sudo honeyd -d -f winxp.conf

Figure 3: nmap scan result for winxp.conf honeypot

Using the daemon mode lets you see all the network requests and the corresponding responses on the screen of the honeypot system. Figures 3 and 4 show the results of scanning the Windows honeypot with nmap. The important points to note in these screenshots are:
- The process is demoted from root level
- An IP address is received from the DHCP server
- The establishment of an ARP binding to Intel's MAC address
- Various ARP responses
- UDP port 17500, which is used by Dropbox for local file synchronisation; there is at least one system using Dropbox on this network. The honeypot responds by closing the connection, since UDP port requests are blocked.
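The open/closed/filtered verdicts that nmap reports against the honeypot can be reproduced without nmap: an ordinary connect() probe sees the same three outcomes (accepted, refused, no answer). A minimal sketch:

```python
import socket

def probe(host, port, timeout=1.0):
    """Classify a TCP port roughly the way nmap reports it: an accepted
    connection means open, an immediate refusal (RST) means closed,
    and no answer within the timeout means filtered."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"
    except ConnectionRefusedError:
        return "closed"
    except socket.timeout:
        return "filtered"
    finally:
        s.close()
```

Unlike nmap's SYN scan, this completes the three-way handshake, so a honeypot will log it as a full connection.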

Figure 4: Scada honeypot nmap scan

Let us now see how honeyd can be configured to mimic Windows XP SP1.

Configuring honeyd to mimic Windows XP SP1

Create the configuration for the Windows XP honeypot in the winxp.conf file as follows:
set winxp personality "Microsoft Windows XP Professional SP1"
set winxp default tcp action block
set winxp default udp action block
set winxp default icmp action reset
set winxp uptime 1234567
add winxp tcp port 135 open
add winxp tcp port 139 open
add winxp tcp port 445 open
set winxp ethernet "intel"
dhcp winxp on eth0
# To configure a static IP on eth0, comment out the dhcp command
# and enable the bind command as follows:
# bind ipaddress winxp

As mentioned on the website http://scadahoneynet., the SCADA honeynet project was launched with the aim of determining the feasibility of building a software-based framework to simulate a variety of industrial networks such as SCADA, DCS, and PLC architectures. It can be used to: Build a honeynet for attackers, in order to gather data on attacker trends and tools Provide a scriptable industrial protocol simulator to test a real, live protocol implementation Research countermeasures, such as device hardening, stack obfuscation, reducing application information, and the effectiveness of network access controls The project dates way back to 2005 but it is very relevant even today in the challenging SCADA security scenario.

The SCADA honeypot

SCADA honeypot installation

Download the latest release of the SCADA honeynet project and expand the tgz to get four Python scripts. The names indicate services emulated by the corresponding scripts:

Explanation of important configuration options: remember that the first three bytes of the MAC address denote the manufacturer's ID number. The command:
set winxp ethernet "intel"

Place these files in the /usr/share/honeyd/plc folder. Also make sure you have installed Python on your Ubuntu box.

configures a MAC address belonging to Intel Semiconductor for the honeypot.
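The "intel" keyword works because the first three bytes of a MAC address (the OUI) identify the vendor, and honeyd fills in the rest. A tiny helper to pull out that vendor portion (purely illustrative):

```python
def oui(mac):
    """Return the vendor (OUI) portion of a colon-separated MAC
    address: the first three of its six bytes."""
    parts = mac.lower().split(":")
    if len(parts) != 6:
        raise ValueError("expected a MAC like 00:11:22:33:44:55")
    return ":".join(parts[:3])
```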



Create the configuration for the SCADA honeypot in the scada.conf file as follows:


set scada default tcp action block
set scada default udp action block
set scada default icmp action open
set scada maxfds 35
set scada uptime 23456787
add scada tcp port 21 "python plc/"
add scada tcp port 23 "python plc/"
add scada tcp port 502 "python plc/"
add scada tcp port 80 "python plc/"
set scada ethernet "00:11:22:33:44:55"
bind scada
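Port 502 in the configuration above carries MODBUS/TCP, the simple binary protocol the honeypot script has to imitate. A client request for holding registers can be sketched as follows (the field layout follows standard MODBUS/TCP framing; the register addresses are illustrative):

```python
import struct

def modbus_read_request(transaction_id, unit_id, start_addr, count):
    """Build a MODBUS/TCP 'read holding registers' request (function 0x03).
    The MBAP header carries a transaction id, protocol id 0, the byte
    count of everything that follows it, and a unit id; the PDU holds
    the function code, starting register and register count, all
    big-endian."""
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    header = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return header + pdu
```

A honeypot emulating MODBUS has to parse frames of exactly this shape and reply with plausible register values.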

Figure 5: PLC home page

Table 2 gives an explanation of the important configuration options. A typical SCADA system uses FTP, Telnet, HTTP and MODBUS services running on TCP ports 21, 23, 80 and 502 respectively; these are assigned to the corresponding Python scripts in the configuration file.
Table 2
Command: bind scada
Explanation: Used to bind the scada honeypot to a static IP address

Command: add scada tcp port 21 "python plc/"
Explanation: Probes towards port 21 (FTP) of the honeypot defined by scada.conf are answered by the corresponding script

Figure 6: Protocols supported

Similarly, three other Python scripts define the responses for port 23 (Telnet), port 80 (HTTP) and port 502 (MODBUS). To start the honeypot configured in scada.conf under daemon mode, use the following command:
sudo honeyd -d -f scada.conf
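The scripts behind these ports are, in essence, small TCP responders that greet a client with a plausible banner and brush off whatever follows, while everything gets logged. A stripped-down sketch of such a service (the banner text and replies are invented for illustration, not taken from the project's scripts):

```python
import socket
import threading

BANNER = b"220 PLC FTP service ready\r\n"  # invented banner

def serve_once(srv):
    """Accept a single client, present the banner, and refuse the
    first command, as a low-interaction honeypot service would."""
    conn, _addr = srv.accept()
    try:
        conn.sendall(BANNER)
        if conn.recv(1024):
            conn.sendall(b"530 Not logged in\r\n")
    finally:
        conn.close()

def start_fake_service(host="127.0.0.1", port=0):
    """Start the responder on a background thread; return the bound port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    threading.Thread(target=serve_once, args=(srv,), daemon=True).start()
    return srv.getsockname()[1]
```

The real scripts additionally write every exchange to the honeypot's log for later analysis.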

Testing the SCADA honeypot

Using nmap for scanning: an nmap scan reveals only three open ports: 21, 23 and 80. By default, nmap scans the 1000 well-known ports listed in the nmap-services file, and this file does not include port 502, used by the MODBUS protocol. To scan all TCP ports, use the following command:

sudo nmap -p1-65535 -n

After detecting the FTP, Telnet and HTTP ports open, try to use the respective clients to access content from these ports.
Port 80, the browser: open the honeypot IP in any Web browser to see the PLC Web page with its Diagnostics, Statistics and Protocols Supported menus.
Port 23, Telnet: telnet to the honeypot IP and establish a connection.

Checking logs on the honeypot

All the traffic received on the scada.conf interface is logged in the /var/log/scadahoneynet.log file, which you can study and analyse; make sure to allow write permission on this file for the user running honeyd. The SCADA honeynet project satisfies the basic requirements of a honeypot:
- It appears to be part of a network, though it is actually isolated
- All access logs are saved for further study
- Its Web interface contains a page that an attacker could perceive to be of great value

Word of caution

Various issues related to the legality of honeypots and honeynets have already been discussed; search the Web for more details. Please make sure you evaluate and understand a particular honeypot by testing it in a lab environment, and do not forget to understand the legal consequences before deploying it in a live environment. As an example, if an attacker uses a honeypot to further launch attacks on third-party systems, the liability may lie with the honeypot owner.

References
[1] Honeyd:
[2] SCADA Honeynet project: http://scadahoneynet.
[3] Honeypots: are they legal? connect/articles/honeypots-are-they-illegal

By: Rajesh M Devdhar

The author is a BE in electronics. He is also a CISA, CISSP and CCNA, and has been working as an IS auditor and network security consultant for the past two decades. He can be contacted at


For U & Me


Just keep typing. The text gets typeset automatically. Leaving a blank line starts a new para. A double backslash (\\) forces a line break.

\subsection{Subsection Title}
%Content
\subsubsection{Subsubsection Title}
%Content
\end{document}

With the above template as a starting point for experimentation, and with a little help from the links in the References section, you are well on your way to learning as much LaTeX as you need. Let's try to learn some basic constructs in LaTeX in the rest of this article.

Quick recipes

Specialised LaTeX functionality is bundled within so-called packages. The default installation, however, gives you standard typesetting sufficient for most purposes.


Enumerated lists can be typeset as follows:

\begin{enumerate}
  \item Item 1
  \item Item 2
  \item etc.
\end{enumerate}

and would appear as:

1. Item 1
2. Item 2
3. etc.

You can explore other kinds of lists in LaTeX by yourself. Replacing enumerate with itemize gives you a bulleted list.
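For example, the bulleted equivalent of the list above would be typeset as:

```latex
\begin{itemize}
  \item Item 1
  \item Item 2
  \item etc.
\end{itemize}
```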


The following is an example of a table insertion. It is fairly intuitive. Note how the left and right columns are respectively right- and left-justified for the purpose of illustration.
FOSS Technology | Description
Apache          | Web Server
Blender         | 3D-Modeller
LibreOffice     | Office Suite
Table 1: Example of a table
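The source for such a table can be written with the tabular environment; a sketch reconstructing the rendered table above (the {rl} column specification gives the right- and left-justification described):

```latex
\begin{tabular}{rl}
FOSS Technology & Description \\
Apache          & Web Server \\
Blender         & 3D-Modeller \\
LibreOffice     & Office Suite \\
\end{tabular}
```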



[1] is a useful and handy cheat sheet. [2] is the place for LaTeX packages. [3] offers a guide to LaTeX. You can download the source code of this article from

By: Gurudutt Talgery

The author likes exploring the contribution of FOSS in improving the productivity of knowledge workers and also writing about it.


For U & Me

Linux jobs are on the rise

If you are working in the open source space, you have every chance of being hired by the biggies. Linux jobs are on the rise, claims Ralf Flaxa, vice president, Engineering, SUSE, and he has his reasons for making this claim. In an exclusive conversation with Diksha P Gupta from Open Source For You, Flaxa explained where and how Linux professionals can seek job opportunities, and how SUSE goes about its hiring. Excerpts:
Ralf Flaxa, vice president, Engineering, SUSE



You repeatedly say that Linux jobs are on the rise, globally. What gives you the confidence to say that?


As the engineering vice president of SUSE, I have been hiring a lot of people in the last few years. If you look at our website, we constantly have 15-35 engineering job openings for SUSE, depending on our approvals and requirement. So, SUSE is growing and so is its engineering team. We are doing well as a company, but I think Linux, in general, is doing great and is witnessing tremendous growth. A lot of traditional companies that used proprietary operating systems for their own development work have now taken serious cognisance of Linux. After adoption of Linux, obviously they need some expertise in-house, which is where you see the demand for Linux professionals. There are times when companies realise that they need expertise and they contact companies like us. We are seeing a general trend towards open source and Linux. A number of companies that have never used it before are now considering it, and are contacting us as an enterprise partner to help them with that. So, in that context, we are growing with these partners and hiring engineers. Also, there are some technology areas that are particularly hot for Linux, like the areas around the cloud. Everything is new in this space, so there is no legacy. It has not been done with a different operating system earlier, so now that it is starting out, it is beginning with Linux. Another big area is mobile computing on smartphones, tablets and netbooks that run Android and Linux. This is why I say that Linux jobs are on the rise.

or tablet space is a feasible option only when you have enormous volumes. It may sound strange, but have a look at the companies with operating systems: how many of them are really making money out of the operating systems is the real question. Google is giving Android away for free. Besides, I cannot point to a platform in this space that is a clear success, be it in terms of popularity or in terms of making money. Hype doesn't mean money!

OpenSUSE 12.3: Sheenless but feature-rich

Let's talk about your latest release, OpenSUSE 12.3. It comes with a lot of improvements in terms of features, but not many cosmetic changes. What stops you from making cosmetic enhancements to the product?
I think it's a question of how much effort can be put into every OpenSUSE release vis-a-vis how much effort you can put into an enterprise release. The OpenSUSE products come out every eight months, so they have to be churned out very quickly, and they are available to users for free; there is cost but no revenue involved. So I do not have the capacity to make OpenSUSE very fancy, but I need my engineers to make our enterprise product nice and stable, and ensure that everything works well. Contrary to our competitors like Ubuntu, who see the consumer market as their business model, SUSE sees the enterprise market as its business model. We absolutely leverage the open source development model and we contribute a lot back. There are quite a few areas where we are ahead of our competitors, and we are investing in those areas. But when it comes to things like artwork and themes, it takes a lot of time and investment, which we would rather put into the enterprise version of the operating system. At the end of the day, the investment we put into the free products is certainly limited compared to the enterprise products, for which customers pay.

You just mentioned that the smartphone and tablet space is getting hotter. What is holding back SUSE from entering this space?
Frankly, business in this space is difficult. Everybody likes this space but the margins there are very, very small. We have had a lot of requests from people wanting to work with us but given the fact that there is very tough competition in these segments, margins are reduced to a minimum. Its actually hard to make money in that space. The mobile and tablet space will definitely continue to grow, Linux and Android adoption will also continue but I am not sure if we would like to get into something that has very slim profit margins to offer. We dont have any active plans to enter this space anytime soon, but of course, if we are able to cut a big contract, we would be willing to try it out. But so far, the business cases that have been presented to us have not been interesting enough. There have been big OEMs who came to us, but we were not impressed by the deal we were offered. So, its not about big OEM partners, its about the business case we get. Just because everyone is doing it, we will not do it. We are a professional business company and so we look at the business case. We make upfront investments only if we see long term returns. Getting into the smartphone
90 | may 2013

What are the highlights of OpenSUSE 12.3, according to you?

First of all, it is going to be the base of our next enterprise product. So, a lot of the technology seen in OpenSUSE 12.3 will become the standard not only for the SUSE Linux enterprise version but also for all major Linux distros. Supporting UEFI and secure boot are its biggest highlights. Secure boot is a big problem today: if you take other distros and try to install them, there's a lot of work required for a successful installation. We have tested our software. We purchased the hardware and we tried different distros, including Ubuntu and Fedora, apart from OpenSUSE. In many cases, OpenSUSE was the only one that installed without any problems. So we are definitely ahead of others with OpenSUSE 12.3. With this version, we are looking ahead to future technologies. Of course, we have some issues to be resolved because it is a pretty new technology.

The highlights here are for people who want to prepare for what is coming in the next couple of months and years. In these areas, I think OpenSUSE is pretty much ahead of the curve. We also have support for both KDE and GNOME, so for users who want a choice between KDE and GNOME, OpenSUSE is the perfect option. OpenSUSE 12.3 is the best you can currently get in terms of dual-booting on secure boot hardware.

For U & Me

Was it a deliberate move to choose the Linux 3.7 kernel in OpenSUSE 12.3?

That's a question of stabilisation. There is a certain point in time when we freeze the kernel version for release, and then we port fixes and features from Linux kernel 3.8. But the version number is 3.7++. Again, if you want driver support for hardware, almost everything in kernel 3.8 is in 3.7. In terms of the build dependencies, we freeze the kernel at a point in time and then we just add patches from the newer version. So, my answer is: yes, it's called Linux 3.7, but if you look under the cover, you will find that it is almost Linux 3.8.

The new avatar of SUSE

How has SUSE changed after being acquired by the Attachmate Group, in terms of the policies related to the software?

There have not been many changes in SUSE engineering. All our engineering processes have worked very well before as well, and so we continue to keep them. I think what has really changed is the unleashing of the SUSE brand. Now you see more green everywhere as well as the new branding. We have been able to be more innovative again. Under Novell, we were more limited as far as innovation was concerned. For example, on the enterprise products, we were the first ones to release an enterprise grade 3.0 kernel, in the SUSE 11 service pack 2. We released kernel 3.0 with our enterprise product. And we did not just release it; we first made it compatible with the product certifications, so users could upgrade from service pack 1 to service pack 2. Red Hat, which is comparatively much bigger and more popular, still doesn't have the 3.0 kernel out. I think these are the areas in which we are again able to be more innovative.

SUSE and the open source community

What role has SUSE played in adding value to the growth of Linux and open source?

SUSE just had its 20th anniversary last year. I think we are one of the grandfathers of Linux. We were one of the first to create a distribution to make Linux consumable. We have the first enterprise product with the Linux enterprise server. So in terms of productising Linux, we have definitely been a key player. SUSE has also been good in terms of hardware support, whether it was for the graphic cards, where you had to use SUSE to do your work, or the secure boot machines. The members of my staff each have between 10-20 years of Linux experience and they are all very active in the upstream community. So look at any popular open source project like GNOME, KDE or the Linux kernel, and you will find SUSE people working there. All the engineers at SUSE have two roles. One side of them is very business oriented, while the other side is very community oriented. So, we never forget that the success of Linux is not because of SUSE or Red Hat; it's because of the community. We contribute heavily to the community and we will, forever, continue to do that.

What percentage of SUSE employees interact with the community on a regular basis?

Some of the people in SUSE Labs devote most of their time to community initiatives. They help in building the Linux kernel, something that we at SUSE also need, but they all do upstream work in their productisation phase. There are other people in the company, like in QA, who spend comparatively less time upstream, but they watch mailing lists and submit bug reports. It's hard to give a general answer but, like I mentioned earlier, every SUSE employee has two sides: business and community. The mix depends on the role they perform. Interaction with the community is almost mandatory for all SUSE employees. It's actually a criterion for hiring. So our first question to an engineer is, "What have you done in terms of open source? Have you contributed to any community; do you know about open source developments?" And so on. These are the key questions in our hiring process.

How much of the contributions to the OpenSUSE project come from India?

The Indian community has traditionally been very active around GNOME and Evolution. In the bigger picture, many majors like Intel and IBM have a couple of luminaries in India, and a handful of them can be seen at the Linux Kernel Summit. Though the number of contributors from India is in no way proportional to the country's population, there are some really good contributions coming from the country. But you see more contributions coming from North America and Europe. I think it is still the early stages for open source in India and it is not so compatible with the culture. The open source communities can be really harsh. In the Indian or the Chinese culture, if you get flamed publicly on a mailing list because your patch wasn't perfect, it gets intimidating for a lot of people. In western culture, being open and critical is not seen as a personal insult; instead, it is seen as a method of making things better. The tone on these open source communities can be very rough, which is hindering contributors from both India and China.


Mathematics Made Easy With Minimal Octave

This fifth article in the mathematical journey through open source introduces Octave, a non-programmer's way of doing mathematics.

Let's Try

Octave sounds like a musical note, but in reality it is the name of its author's Chemical Engineering professor, who was known for his 'back-of-the-envelope calculations'.

All of bench calculator, and more!

All the operations of the bench calculator (bc) are just a subset of Octave. So, there's no point in reinventing the wheel. Whatever operations can be done in bc (arithmetical, logical, relational and conditional) can also be done with equal ease in Octave. And like bc, it can be used as a mathematical programming language. So, why did we waste time learning bc? We might as well have come directly to Octave. Well, there's one difference: precision. You can't get that in Octave. If you need it, you would have to go back to bc. With Octave, let's start with N dimensions, or in other words, vectors and matrices.

Getting started

Commands: Typing 'octave' on the shell brings up Octave's interactive shell. Type 'quit' or Control-D to exit. The interactive shell starts with a welcome message. As in bc, use the option -q for it not to show. Additionally, use the -f option for it not to pick up any local start-up scripts (if any). So, for our examples, the command would be 'octave -qf' to start Octave with an interactive shell.

Prompts and results: The input prompt is typically denoted by 'octave:X>', where X is just a number showing the command count. Valid results are typically shown with '<variable> = ', 'ans = ' or '=> '. The command count is incremented for the next input. Errors cause error messages to be printed, and then Octave brings back the input prompt without incrementing the command count. Putting a semicolon (;) at the end of a statement suppresses its result (return value) from being displayed. Comments start with # or %. For block comments, #{ ... }# or %{ ... }% can be used. Detailed help on a particular topic can be obtained using 'help <topic>', and the complete documentation can be accessed using 'doc' on the Octave shell.

$ octave -qf
octave:1> # This is a comment
octave:1> pi # Built-in constant
ans = 3.1416
octave:2> e # Built-in constant again
ans = 2.7183
octave:3> i # Same as j, the imaginary number
ans = 0 + 1i
octave:4> x = 3^4 + 2*30;
octave:5> x
x = 141
octave:6> y
error: `y' undefined near line 6 column 1
octave:6> doc # Complete doc; Press 'q' to come back
octave:7> help plot # Help on plot
octave:8> A = [1 2 3; 4 5 6; 7 8 9] # 3x3 matrix
A =
   1   2   3
   4   5   6
   7   8   9

octave:9> quit
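To get a feel of the precision difference from bc mentioned earlier, the short session below contrasts Octave's double-precision arithmetic with bc's arbitrary precision. This is a sketch: the exact digits printed depend on your Octave build, so only the commands and the expected behaviour are shown.

```octave
octave:1> format long    # show the maximum digits a double can hold
octave:2> 1/3            # about 16 significant digits, then it stops;
                         # bc, with a large enough 'scale', can go on
octave:3> 2^64           # a 20-digit integer; bc prints it exactly,
                         # a double stores it as a rounded approximation
octave:4> 2^64 + 1 == 2^64
ans = 1                  # true! the +1 is smaller than the gap between
                         # adjacent doubles at this magnitude
```

This is why, for number-theoretic experiments needing exact big integers or very long decimal expansions, bc is still the tool to reach for, while Octave shines for everything N-dimensional.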

Matrices with a heart

We have already seen matrix creation. Matrices can also be created across multiple lines; Octave just keeps waiting for further input, prompting with '>'. Check below for how to create the 3x3 magic square matrix, followed by various other interesting operations:

$ octave -qf
octave:1> M = [ # 3x3 magic square
> 8 1 6
> 3 5 7
> 4 9 2
> ]
M =
   8   1   6
   3   5   7
   4   9   2


octave:2> B = rand(3, 4); # 3x4 matrix w/ randoms in [0 1]
octave:3> B
B =
   0.068885   0.885998   0.542059   0.797678
   0.652617   0.904360   0.036035   0.737404
   0.043852   0.579838   0.709194   0.053118

octave:4> B' # Transpose of B
ans =
   0.068885   0.652617   0.043852
   0.885998   0.904360   0.579838
   0.542059   0.036035   0.709194
   0.797678   0.737404   0.053118

octave:5> A = inv(M) # Inverse of M
A =
   0.147222  -0.144444   0.063889
  -0.061111   0.022222   0.105556
  -0.019444   0.188889  -0.102778

octave:6> M * A # Should be identity, at least approx.
ans =
   1.00000  -0.00000   0.00000
  -0.00000   1.00000   0.00000
   0.00000   0.00000   1.00000

octave:7> function rv = psine(x) # Our phase shifted sine
> rv = sin(x + pi / 6);
> endfunction
octave:8> x = linspace(0, 2*pi, 400); # 400 pts from 0 to 2*pi
octave:9> plot(x, psine(x)) # Our function's plot
octave:10> polar(x, 10 * (1 - sin(x)), 'm*') # bonus Heart
octave:11> quit

[Figure 1: Plot of our 30° phase-shifted sine]
[Figure 2: The heart]

Figure 1 shows the plot window that pops up in response to the plot command (# 9), and Figure 2 is the bonus magenta heart of stars (*) drawn by the polar coordinates command (# 10).

What next?

With a ready-to-go level of introduction to Octave, we are all set to explore it the fun way. What fun, you may well ask. That's left to your imagination. And as we move on, you could take up one or more fun challenge(s) and try to solve them using Octave.

By: Anil Kumar Pugalia

The author is a gold medallist from the Indian Institute of Science. A hobbyist in open source hardware and software, he also has a passion for mathematics. He can be reached at



Boost Your Career as a Database Administrator


The demand for adept database administrators will continue to grow in the years to come, opening up innumerable opportunities. A career in this domain comes with both challenges and excellent growth potential.

At a time when Big Data is the buzzword, organisations are looking for novel ways to leverage their colossal data to yield some robust business benefits. Not surprisingly, this has led to immense demand for database administrators (DBAs) to manage the varied databases. Before getting into the nitty-gritty of making a career in database administration, it's important to have a fundamental understanding of the role of a DBA. A database administrator looks into the installation, backing up, testing and security of the production databases within an IT environment. A DBA also plays an important role in undertaking disaster recovery, performance tuning, instance cloning, or doing a basic back-up routine for the database. Industry leaders say that the job opportunities in this sector are tremendous. As Sai Phanindra T, corporate trainer of SQL School, Hyderabad, puts it, "DBA jobs are evergreen as the actual need for a database administrator comes into play once a given project goes live on the production environment. Though DBAs are required to assist the project development and testing teams during the life cycle of a project, the most critical aspect of the project implementation, i.e., the maintenance of a project, needs the database administrator."

How important are DBAs for an organisation?

The job of a DBA in an enterprise calls for spontaneity and pro-activeness, as the administrator has to maintain back-ups and pull through data to ensure its accessibility at all times. Vinay Pandey, technical director, Oracle, Koenig Solutions, New Delhi, shares, "This job is vital for mission-critical systems like banks and financial institutions, where one cannot afford any loss of data. A DBA has to devise policies to maintain backups of databases so that there is no hindrance in the functioning of an organisation, in case of any casualty." Here are some of the roles that a DBA performs for an organisation:
- Devising a scalable, steadfast and flexible database that will help in enhancing the growth of the organisation.
- Doing away with the bottlenecks that hamper database transactions, and ensuring that the database remains swift and responsive.
- Playing the role of the watchdog and ensuring that the organisation's data is safeguarded. This means the DBAs have to chart out strong security policies involving the data as well as the users accessing that data.
Vinay Pandey also believes that since organisations are shifting to ERP and CRM applications, there has been a rise in the demand for DBA professionals.

Name of Training Institute: Contact Details
- Koenig Solutions Ltd: Koenig Campus, B-39, Plot No. 70, KLJ Complex-1, Shivaji Marg, Moti Nagar, New Delhi - 110015; Ph: 09910710143
- Tech Altum: 501, Om Complex, 5th Floor, Naya Bans, Sec-15, Noida-201301; Ph: 0120-4280181/9911640097
- Astrid Solutions: 1030, 3rd Floor, 27 Main, Opp. Megamart, Jayanagar 9th Block, Bengaluru; Ph: 080 41100370
- SQL School: I302, Sai Manor Towers, X Roads, SR Nagar, Hyderabad; Ph: 040 64577244

Of pre-requisites and skillsets

A basic knowledge of software development languages is the foremost requirement for a DBA, as platforms like SQL Server are used in organisations. Poorti Srivastava, centre head, Astrid Solutions, Bengaluru, says, "Knowledge of software languages makes the job of DBAs easy to a large extent. Apart from being familiar with core database concepts, they should have logical abilities and should also be aware of all the features available in the database tool that they are using." The DBA should have the ability to interact with the database through query languages like SQL and PL/SQL. With the rapid rise of data consumption, the DBA should have the skill to support more data. The capability to foresee and implement the required database changes to accommodate the organisation's growth is a must-have skill of the DBA, feels Srivastava.

Importance of certifications

Certifications in DB administration certainly boost the employability of a candidate in the recruitment market, believes Isha Malhotra, director, Tech Altum, Noida. "Certifications demonstrate your commitment and seriousness towards your domain, and they do increase the employability of a candidate. However, the skills go beyond certifications as recruiters are more interested to know what candidates can do, rather than their certifications. So, it's important to arm yourself with the right skillsets," says Malhotra. The available certifications for MySQL are: Oracle Certified Associate, MySQL 5; Oracle Certified Professional, MySQL 5 Developer; Oracle Certified Professional, MySQL 5 Database Administrator; and Oracle Certified Expert, MySQL 5.1 Cluster Database Administrator.

Some handy tips to get hired

If you wish to explore this interesting field as a career option, here are some tips:
- Work hard and smart
- Practice and then implement
- Keep yourself upgraded with every new release and version
- Know the current market trends

By Priyanka Sarkar
The author is a member of the editorial team. She loves to weave in and out of the little nuances of life and scribble her thoughts and experiences in her personal blog.



Open Strategy

Indrajit Sabharwal, managing director, Simmtronics Semiconductors Ltd

Simmtronics is becoming one of the most frequently mentioned names in the world of smartphones and tablets. The Indian manufacturer has launched a series of Android devices in the past few months because of the growing popularity of the Android platform and the firm's capability to offer cut-throat competition to the other OEMs. Diksha P Gupta from Open Source For You spoke to Indrajit Sabharwal, managing director, Simmtronics Semiconductors Ltd, about the company's product line-up and strategy. Read on...

Simmtronics seems to be banking big on the tablets market, despite having a presence in many other segments. What are your views on the tablet market segment in the country and how important is it for your business?

Simmtronics is the only tablet OEM in India which is making an Indian tablet in the true sense. We are manufacturing our tablet devices in our facilities in India. So, what we are offering is a truly Indian high quality product, unlike many other market players who import their devices from China. Besides, the tablet PC segment is where we will continue to see tremendous growth. Tablets are set to take over PCs. They will soon become the biggest medium of content consumption. So as an OEM, to survive in this competitive market, we have to bring out products that are compelling. I think we are doing that successfully, with our range of Xpad devices. In fact, we have launched three tablets in April.

Can you tell us more about your recently launched tablet PCs?

Sure. We have launched two tablets called the XPAD Simm-X722 and the XPAD Simm-X802. The former offers a 17.7-cm (7-inch) capacitive touchscreen, multi-touch G+G with 2G and 3G support. It comes with a SIM slot and is packed with 512 MB DDR3 RAM and 4 GB internal memory. The tablet runs the Android 4.0 (Ice Cream Sandwich) operating system, with an A8 chipset, a 1.0 GHz processor and a 3500 mAh battery. Its USP is that it supports 2G calling and sports a sleek body, which makes it easier for users to carry around. The XPAD Simm-X802 offers both 2G and 3G support and is a single SIM tablet. It sports a 20.3-cm (8-inch) capacitive touchscreen with an 800 x 600 resolution. The tablet comes packed with 512 MB RAM along with 8 GB internal memory expandable up to 32 GB. It runs Google's Android 4.0 (ICS) operating system and has an A10 Cortex A8 chipset, with a multi-core 1.2 GHz processor. The tablet comes with pre-loaded apps, including a Web browser, a super-HD player, music, a calculator, Gmail, calendar, play store, a clock, YouTube, a camera, a PDF file viewer, Documents To Go, a sound recorder, Skype, Gtalk, Latitude, Maps, Messaging, People, Phone, one browser, Stick Cricket and Bigflix. It has a 4000 mAh battery. Apart from these, we have introduced a quad-core tablet PC as well, with the Android Jelly Bean operating system, which is being offered at a pretty affordable price. The tablet is the first of its kind in the world. It is capable of playing four videos simultaneously: yes, users can view four videos on one screen, all at the same time. We also have plans to foray into the smartphone segment with our products, the XPAD Smartphone X1 and XPAD Smartphone X2. While the XPAD Smartphone X1 will run Android's Ice Cream Sandwich operating system, the XPAD Smartphone X2 will run the Android Jelly Bean operating system. These are single core and dual core smartphones, respectively, and we are looking at bringing out a quad-core smartphone as well.


Simmtronics has launched a 25.4-cm (10-inch) tablet, the XPAD X-1010, in the Indian market. Not many players have ventured into this screen-size segment. What prompted you to explore this space?


We are a tablet manufacturing company. So, our focus has been to bring out offerings across the board. Launching a 25.4-cm (10-inch) tablet is a part of that strategy. Also, I feel that consumers are looking at tablets as their source of entertainment while on the go. Watching videos and playing games is a totally different experience on a tablet of this size. We, being a technology company, work to have an edge over others with our innovations and technology. A 25.4-cm tablet is one such offering. It has been accepted well across the country.

What is your strategy for selling these tablet devices?

We want to target the age group of 15-28 years with our tablet devices. We design our products keeping the needs and aspirations of only this target group in mind. We have been working through our distributor and channel network for the sale of our devices. We have recently joined hands with HCL for the distribution of our devices. I am sure this is going to boost the presence and availability of Simmtronics devices tremendously, particularly in the regions we were not able to reach till now. Also, we are looking to enter into tie-ups with Large Format Retail (LFR) chains.

Apart from manufacturing your own products, do you manufacture products for other OEMs as well?

Yes, we manufacture products for about seven different OEMs in India, including Micromax and Sahara. Apart from these, we manufacture for three international OEMs as well.

One major criticism of Simmtronics tablets is that they look pretty similar to Micromax's offerings. How would you respond to that?

Well, I don't think we are similar to Micromax's offerings in any way. We are much better than them in many ways. In addition to a great package of features and specifications, with our tablet devices the customers get free Wi-Fi access in 5000 outlets across the country, including McDonald's, CocoBerry, Cafe Coffee Day, etc. I can claim that the advantages we are offering, no tablet company has offered before. So we clearly have the upper hand. Simmtronics was recently ranked No 3 amongst tablet manufacturers globally. The speed at which we can deliver new technologies and customise a product as per market requirements is our USP, and makes us different from any of the Indian players, at least.

What about after sales service for your products?

We have around 500 service centres across India, so after sales service is not an issue for our products.

It is now time for phablets. What is your take on this growing segment?

The phablet is a combination of a smartphone and a tablet, starting from a size of 12.7 cm (5 inches). If someone thinks that phablets can replace tablets, they are mistaken. The experience is totally different. A smartphone, irrespective of its screen size, cannot cater to the requirements that a tablet can meet. Tablets have a different set of applications and offer an entirely different experience. You may be able to watch a video on a smartphone but it is certainly not the ideal device to watch a movie on. This is where tablets play a role. We like to call our 12.7-cm devices smartphones, not phablets.


So you don't buy Samsung's concept of phablets?

No, I am not too convinced with this logic that there can be a hybrid of smartphones and tablets. Both the devices have their own features and experiences to offer.

Any reasons for choosing Android as a platform for all your devices and not Windows 8?

Yes. First, Windows 8 has its own limitations. For modern day tablets and smartphones, Android has become the default. Android has a couple of open source alternatives already. Ubuntu for smartphones and tablets looks attractive. We will be watching out for developments with respect to Ubuntu but will take a final call on how the eco-system shapes up. As far as Windows 8 is concerned, we do not have any immediate plans to work with the software.



Skava is Looking to Hire Smart Programmers

Recruitment Trends

It's not just about knowing how to code. These days, it's also about smart programming. Young companies like Skava Inc are looking to hire smart engineers who can write good code, despite the current turbulence in the global economic situation. Diksha P Gupta from Open Source For You spoke to Arish Ali, CEO and co-founder, Skava Inc, about the firm's hiring plans in India. Read on...
Arish Ali, CEO and co-founder, Skava Inc

It was in 2003 that a firm called Skava Inc came into being. Two enterprising brains came together to build something that was not only futuristic but also very business-oriented. They started developing mobile applications that year. Skava built the first retail site that was fully optimised for tablets, for Staples, the second largest online retailer in America. Currently, the company is working with multiple retailers, developing their in-store technology solutions. Skava has recently announced the opening of its new development centre in Coimbatore. Located in the Tidel Park, the 2787 sq m (30,000 sq ft) facility can house up to 400 employees. The centre will serve as a hub for Skava's product development, services and customer support. Skava works to provide mobile and omni-commerce solutions to many of the top tier retailers in the US, and is a known name in this space. Speaking about the facility, Arish Ali, chief executive officer and co-founder, Skava Inc, said, "We have launched this new state-of-the-art development centre at the Tidel Park, Coimbatore. We have been in this city right from when we started out. This new facility allows us to build our centre of excellence here, which can house up to 400 people. We are a development company and we develop mobile commerce software. To be precise, we develop platforms for mobile commerce, shopping on mobile devices, etc. The centre will serve as a hub for Skava's product development, services and customer support. We already have about 180 people working on developing our software. We have a couple of sales offices outside India as well. The new Coimbatore facility has the capacity for 400 people. So we will be hiring additional talent in the next two years."

Jobs for smart engineers!

Though Skava's new facility is looking to hire engineers, not all engineers will fit the bill. Ali explains, "Our target is to hire smart engineers. By smart engineers, I mean people who can write good code, people who have a problem solving attitude and can be instrumental in bringing out great products. The technologies that we use are some of the most current and cutting edge. We develop technologies for all mobile operating systems. Smartphones and tablet devices are the target devices our software runs on. We provide really scalable technology at the back-end. Our customers are among the biggest retailers in the US, who use our mobile commerce solutions. They are happy with our solutions because they are able to handle the volume of users through our technology, on a day to day basis. So, that's where the quality of the product that we build is important and, hence, the workforce we hire plays a role." Choosing Coimbatore as the place to develop this facility was a conscious and meticulous decision. He shares the importance of the city, saying, "Coimbatore has tremendous talent and infrastructure to offer. The educational institutions at Coimbatore have been a fertile pool of outstanding engineers for us. That is why Coimbatore has always been at the heart of our growth strategy, and the opening of this state-of-the-art new centre shows our commitment to the city and its people."

Open source technology plays a major role

The company uses a lot of open source technology, but chooses it with a lot of care. Ali asserts, "We do use open source technology, but we exercise enough caution and choose it very meticulously for our products. Some of the technologies we use include Apache Web servers and jQuery. Our core platform, the development of which is done by our team, is based on Java. There are some open source libraries that we use frequently, like jQuery." So, do open source developers fit the bill better? Well, apparently not! Here's what Ali has to say: "We essentially look for good developers, who have good coding knowledge. Frankly, the discussion has gone way beyond open source or non-open source. If developers are skilled, they can be trained to work on any platform. So, it is the coding skills that make a good developer. Of course, if they have knowledge about the technologies that we use, it is an advantage."


Finding the right talent is a challenge

India is rich in talent, which is why Ali chose to expand the development wing in the country. He takes pride in saying, "India has got the best software talent out there. I am based in Silicon Valley. There are many companies I work with personally and I find that Indian talent has an upper edge. Our new facility is based on the confidence that we will be able to find the required talent. I can proudly say that we, as a company, have grown on the basis of software that was made by people in India." Though India has no dearth of software talent, surely there is a dearth of developers with exceptional coding skills? Ali agrees, saying, "It's hard to find developers with superb coding skills. We are always looking out for good programmers, developers and engineers. Good coding skills are definitely not a common thing. It really doesn't matter where the developers come from and what kind of technological background they have had in the past; what really matters is the way they write code and their programming skills." Skava believes that the present availability of good talent is courtesy of the education provided in our engineering colleges, but there are changes required to make things better. Ali explains, "People are good with their theory, but they are not strong in executing the theory. So, to improve the scenario, we plan to interact with a lot of colleges and let the HoDs know what kind of skills are really in demand in the market, in order to make the engineers employable. We keep communicating with the colleges and we are planning a formal outreach plan with institutions. This includes holding coding contests, going to the campuses and telling them about the latest technologies. We want to make sure that the students also realise what coding skills they require if they want to continue working in the software industry for a longer period of time. For instance, if they are learning programming in COBOL, it may not be very useful for them, but learning iOS and Android programming will be of great help in the future as well, because these platforms are here to stay."

may 2013 | 101

For U & Me

We are where we are because of open source technology

Though open source technology has become commonplace now, there are companies hesitant about adopting it. The open source journey of Medma Infomatix Pvt Ltd should be an inspiration to them. Though Medma chose open source technology to get out of a 'mess' created by proprietary technology, open source has since become fundamental to the company's business model, proving to be a good source of revenue.


A recent report claims that open source technology is 'eating' the software world. The revelation was made in a survey conducted jointly by Black Duck Software and North Bridge Venture Partners. With the growing adoption of Linux and open source technology, one cannot deny that it is becoming the norm today. Not just companies based in Tier I cities, but those in Tier II and Tier III towns are also showing keen interest in open source technologies, in their bid to make it big in the software world. One such firm is Medma Infomatix Pvt Ltd, which is based in Lucknow.

Medma Infomatix Pvt Ltd provides custom IT solutions to SMBs across the globe. Its technology solutions are built using open source tools and technology. The company started its operations in 2005 with just six to seven people and, today, it has grown manifold. It now has three offices in Lucknow and one in Ahmedabad, with a total employee strength of over 85 people. The owners have also incorporated a wholly owned subsidiary in the US, which will start operations by the end of this year.

Speaking about the technologies used by the company, Harsh Jaiswal, founder of Medma Infomatix Pvt Ltd, said, "We use a lot of open source technologies including PHP, Ruby on Rails, AIR, Magento, OpenERP, Android, MySQL, SQLite, etc. We have been using these technologies to provide content management systems, e-commerce websites, ERP/CRM solutions, document management, cloud applications, mobile games/apps, Facebook applications, et al."
Harsh Jaiswal, founder of Medma Infomatix Pvt Ltd


Open source: The way to go

OpenBiz For U & Me

Tips for open source tech businesses
As there are many new technologies and options available now, businesses should start by focusing on one particular technology. Also, before applying or using it for their clients, they must first use it themselves. This is what we have done to grow our business. Instead of selling or promoting open source technologies, businesses need to sell solutions that work for their clients. So, the focus should be on the solution rather than on the technology. The solution has to be developed after analysing each client's specific business requirements.

Choosing open source technology over proprietary solutions was a decision made on the basis of the experiences of Jaiswal and his team. He recalls his first experience with open source technology: "Actually, we started with Debian in 2008 after a virus attack that infected almost all the PCs, and even caused a network failure and failed FTP connections to clients' websites. Since we were running on proprietary operating systems, these virus attacks were quite common; we faced frequent downtime and, hence, were losing a lot of money in maintaining our infrastructure. But once we started using Linux as our OS, all problems relating to virus infections and security vanished. That was a turning point for us. We now do not look at proprietary alternatives because open source is affordable, scalable, secure and robust."

"We are where we are because of open source technology," adds Jaiswal.

Jaiswal's firm not only uses open source technology in building solutions but also recommends it to clients and counterparts. He says, "The beauty of using open source technology is that your data is not locked in, so you are free to try any other technology or tool if you want to. So, we have been updating our infrastructure regularly and have been embracing better tools/software to enhance our operations. Similarly, it is beneficial for our clients because they can scale their solutions anytime."

The journey was not as easy as it sounds now. Convincing clients about using 'not so popular technologies' was quite a task for Medma's team. Jaiswal recalls the early days: "Initially, we had to educate clients about open source technologies and then convince them. There were times when we gave them free solutions to try out our open source options. But now the scene is quite different. Clients are very much aware of the benefits of using open source technologies and are demanding that their products/solutions be developed using open source alternatives."
"Vendor lock-ins are not a worry any more. People want complete control over their solutions, and what better option than open source technology to solve this problem?" he says.

Using open source technology brings not only financial benefits but other advantages as well. Jaiswal explains, "Most of the time, open source technology is free to use, so you can try it out without risking your money. But this is not the only benefit one gets. I feel open source technology is much more secure, scalable and robust and, along with wonderful technology, you get a community of other users who are there to help you out. This is not the case with proprietary solutions, where users are self-centred and it is difficult to find communities that help other users."

The open source technology business in Tier II cities

Choosing a Tier II city was a deliberate move on Medma's part. Jaiswal says, "I wanted to do something in Lucknow as it's my home town. At that time, there was no IT scene in Lucknow. There were only a handful of companies, engaged in catering to small local clients. We wanted our company to cater to overseas markets, but Lucknow was not on the world IT map, so we had to educate our clients a lot."

The decision came with its own challenges, the main one being scouting for talent. Lucknow doesn't have a scarcity of talent, but it definitely has a dearth of experienced open source professionals, and there is a general lack of awareness about the latest technologies. Jaiswal says, "The scarcity of experienced resources in open source technology is one of the biggest challenges. As most of the educational institutes are still teaching proprietary technologies, it becomes difficult to find readily-employable resources in the market. Also, there are very few specialised training centres teaching students about these technologies, and those that exist offer network and OS training, which is not what is required in the present scenario. We need centres that train youngsters in the technologies being used to develop modern-day IT solutions."

Since it does not readily get the desired talent, training the right kind of people on open source technologies is the solution Medma has resorted to. Jaiswal explains, "We hire talented freshers from colleges and institutes, and then train them on the required technologies. Most of the time, whatever they have learned in their colleges is of no use to us, so we have to start from scratch and train them on different technologies as per our requirements. Since these local resources are keen to join companies in other metro cities, we offer them a handsome stipend to join us and get this training."

By: Diksha P Gupta
The author is assistant editor at EFY.



A List Of Mobile (Android) Apps Development Providers

AppStudioz | Noida
With the successful launch of more than 700 applications, AppStudioz has a proven track record of meeting stringent deadlines and delivering cost-effective solutions that do not compromise on performance. AppStudioz's expertise has been demonstrated in several out-of-the-box augmented reality solutions, as well as a range of apps for education, audio/video, location, 2D and 3D games, healthcare and social networking.

DI Webtech | Gurgaon
DI Webtech is a leading Android apps development company that has been catering to its customers since 2004. DI uses the open source Android platform to build custom-based apps. As Android is an open source software stack for mobile devices, backed by Google, DI develops business and customised apps based on this open platform so that industry players can't restrict or control the innovation. The company plays the role of a technology and business advisor to its clients and recommends the Android platform for mobile apps, as it provides the best solution in terms of usability, ease of long-term maintenance and ROI.

Divum | Bengaluru
Divum has designed and developed over 100 highly interactive Android applications for its global clientele. Its vast experience and strong domain expertise ensure its clients' business needs are met. The company has found it rewarding to harness the true potential of Google Play for its visibility and reach, which covers over 200 million active Android users.


E-Zest Solutions | Pune

E-Zest specialises in working with the Android development kit, making use of the Java programming language and running in a virtual machine or on a custom Linux kernel. Android is a multi-tasking and multi-threaded environment, so developers have significant control over things like an application's appearance and ultimate capabilities. E-Zest has a big pool of Android specialists who can start working on client projects on demand, instantly. These developers are competent in Android development and work closely with clients at every stage.

Intelgain | Mumbai
Intelgain is a technology solutions company that provides enterprise mobility solutions and software product engineering services around open source and Microsoft platforms for multiple business verticals. The company has delivered several mobile apps on Android as well as for the iPhone/iPad, enabling greater value addition to businesses worldwide, thereby resulting in greater ROI for their investments.

Kellton Tech Solutions | Gurgaon

The company is a global IT organisation offering customised IT services and solutions for the mobility, Web, ERP, security and cloud domains. Kellton Tech's Android Studio strives to deliver Android-based mobile apps with novel functionality and unmatched performance, all packed within an artistic framework. The company's competence and expertise in all areas of Android application development, right from conceptualisation, UI and creative development to quality assurance, maintenance and support, helps it provide its customers with wonderful apps that utilise the full strength of the Android platform.

Special mention: Kellton Tech has delivered customised Android-based mobile applications for a wide variety of industry segments including retail, e-commerce, media and publishing, education, lifestyle and market research. Kellton considers itself one of the pioneers in the use of novel technologies like location-based services, augmented reality and geo-tagging in the field of Android app development. Did you know? Kellton's app for one of India's largest organisations in the e-learning segment is the first of its kind to offer online smart classrooms on Android-based tablets. It has also developed Android apps for one of India's best and most-used travel portals and India's biggest movie exhibition chain. Website:


NectarBits | Ahmedabad
The company is well-equipped with proficient developers who keep themselves updated on current trends when it comes to Android apps development. The team makes customised designs, while looking into factors like development, analysis and quality checking.

Openxcell Technolabs | Ahmedabad

The company has developed innovative Android apps that are being used in different fields such as business, travel, finance, sports, entertainment, health and fitness, to name a few. It develops apps for the new Android Jelly Bean OS as well. Openxcell is able to provide cost-effective Android application development services due to its strategic offshore location and fast app development processes. The company provides apps on business and finance, entertainment, interactive education, 2D and 3D Android games, health, fitness and medical, social networking and more.

Rapidsoft Technologies | Gurgaon

The company's Android application developers have hands-on experience in developing mobile and tab apps, which involves working on both easy and complex solutions. Apart from developing native solutions, the company works on unified and cross-platform applications as well. It is one of the leading Android development companies in India, with dedicated developers who have a thorough knowledge of conceiving and concluding Android mobile software development projects.

trivialApps | Lucknow
The company's core competencies are in the areas of responsive apps design, user experience evaluation, custom apps development, apps for Web services, apps integration with third party services, and apps security. It offers a range of app design and development services, from tactical engagements to full project lifecycle solutions that enable clients to expand the reach and range of their systems.

YandroidApps | Ahmedabad
The company's Android development team's proficiency lies in designing, developing and customising Android applications for every purpose, ranging from Android consulting for application design, performance enhancement, security, custom programming of solutions for Android mobile apps, re-engineering and porting of Android applications, periodic Android application maintenance and upgrades, third party library building, mobile business software development, etc.






Setting the tab in Vim
Normally, you should set your tab space value to four spaces while programming. By default, this value is eight spaces. Here are the steps to customise the tab space value in your favourite Vim editor:

$ sudo vim /etc/vim/vimrc

Then add the following line:

set tabstop=4

Now when you check, your tab space will be set to 4 in Vim.
Dibyendu Roy,

Recovering a recently opened deleted file
By using lsof, you can recover a deleted file that is still held open by a process. This comes in very handy when an attacker gains access to the system, executes commands or makes configuration changes, and then removes the log file(s) to erase the evidence. A sysadmin can use this method to recover a file that is still open in some process, to check what the hacker has changed. lsof (list open files) is the command used for this:

lsof | grep syslog    (lists processes that have this file open)
rsyslogd 990 root 1w REG 8,3 141400 1237857 /var/log/syslog

Here, the process 990 (PID) has opened the file /var/log/syslog with file descriptor 1 (1w). To recover the content of the file, just run the following command:

cat /proc/990/fd/1 >

You will have the content of the file stored in the file you redirected to.
Lakshmanan Ganapathy,

Disabling USB in Linux
There are various ways by which you can disable a USB mass storage device. Here is how you can disable USB by editing grub.conf or menu.lst. All you need to do is open the file in a text editor and append nousb as one of the kernel parameters:

kernel /vmlinuz-2.6.18-128.1.1.el5 ro root=LABEL=/ console=tty0 console=ttyS1,19200n8 nousb

Save and close the file, and reboot the system to make the change effective:

# reboot

Ajinkya Jiman,

Missing the shutdown option?
If you cannot find the Shutdown option on your PC that is running GNOME 3 as the desktop environment, just press the ALT key and you will see your favourite shutdown option in the system menu.
Karthikeyan,

Securing a directory
Here is a tip that uses the sticky bit feature of Linux to secure a directory. A sticky bit ensures that no user other than the owner and the superuser can delete files in the directory. When a sticky bit is applied to a directory, other users cannot delete anything in it, even if they have full permissions on the directory. To apply the sticky bit to a directory, run the following command:

# chmod o+t mydir

This feature is extremely useful for group projects where multiple users work with the content of one single directory.
Tulika Narang,

Changing the ports of ftp and ssh
Here is a simple tip to change the default ports of ftp and ssh. To change the port of ssh, edit the file /etc/ssh/sshd_config and add the following line:

Port 222

Restart the ssh service to make the change effective. Now you need to specify the port number with the -p option whenever you want to access this system:

# ssh localhost -p 222

To change the port of the ftp server, edit the file /etc/vsftpd/vsftpd.conf:

listen_port=2100    ## add this line

Restart the ftp service, and run the following command to check whether the listening port has changed:

ftp localhost 2100

You can also check the open ports using the netstat or nmap commands.
Sumit Chauhan,

Log in message
Here is how to get greeted with a welcome message every time you log in. Create a shell script in your home directory:

$ touch
$ chmod 777
$ cat

Then add the following text to it:

sleep 10
fortune -s > record
read a < record
notify-send "$a"

Next, save, exit and execute the script, as follows:

$ bash

Now add it to your start-up applications and enjoy fortunes every time you log in. The fortune command in Linux prints a random epigram.
Sachin Selokar,

Creating a desktop shortcut in Ubuntu
Here is a command to create a desktop shortcut for any folder:

ln -s /home/test ~/Desktop/abc

This will create a shortcut to the folder named test (in /home/test) on the desktop, which will be named abc. This works on most Linux flavours, not only on Ubuntu.
Shakir Memon,

Make your pen drive bootable
If you cannot boot from the DVD/CD-ROM drive, but can boot from a USB device such as a USB pen drive, the following alternative method can be used to boot Linux.
1. Check the name of your pen drive by issuing the following command:

# fdisk -l

This will show the pen drive's name and size, e.g., /dev/sdb.
2. Copy the file diskboot.img from the Red Hat installation DVD to /root.
3. Take a back-up of the data on the pen drive, and then execute the following command:

# dd if=/root/diskboot.img of=/dev/sdb

4. In the BIOS, set the first boot device as USB Device.
Suresh Jagtap,

Share Your Linux Recipes!
The joy of using Linux is in finding ways to get around problems: take them head on, defeat them! We invite you to share your tips and tricks with us for publication in OSFY so that they can reach a wider audience. Your tips could be related to administration, programming, troubleshooting or general tweaking. Submit them at The sender of each published tip will get a T-shirt.
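The deleted-file recovery tip above can be tried out safely with a throwaway file. In this sketch, the shell itself holds the open descriptor (instead of a daemon like rsyslogd), and all the file names are made up for the demo; it relies on the Linux /proc filesystem:

```shell
#!/bin/sh
# Sketch: recover a deleted file that a process still holds open.
# File names here are illustrative; requires Linux /proc.
echo "evidence line" > /tmp/demo.log

# Keep file descriptor 3 open on the file from this shell:
exec 3< /tmp/demo.log

# Delete the file; its data survives as long as the fd stays open.
rm /tmp/demo.log

# Recover the content through /proc/<pid>/fd/<fd>:
cat "/proc/$$/fd/3" > /tmp/recovered.log

exec 3<&-                 # close the descriptor
cat /tmp/recovered.log    # prints: evidence line
```

Once the last descriptor is closed, the data is gone for good, which is why the recovery has to happen while the process still has the file open.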

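The sticky-bit tip can be verified by checking the directory's permission string before and after; the directory name below is invented for the demo:

```shell
#!/bin/sh
# Demonstrate the sticky (restricted-deletion) bit on a directory.
# The directory name is illustrative.
mkdir -p /tmp/mydir
chmod 777 /tmp/mydir    # world-writable, like a shared project directory
chmod o+t /tmp/mydir    # add the sticky bit

# The permission string now ends in 't' instead of 'x':
ls -ld /tmp/mydir | cut -c1-10    # prints: drwxrwxrwt
```

This is the same mode (1777) that /tmp itself carries, and for the same reason: anyone may create files there, but only a file's owner (or root) may delete it.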


Interop, Las Vegas
May 6 - 10, 2013; Mandalay Bay, Las Vegas
A place for thousands of information technology professionals to gather for unparalleled learning, networking and inspiration in a vendor-independent setting.
Website: lasvegas/

Gartner IT Infrastructure, Operations & Data Center Summit
May 13 - 14, 2013; Grand Hyatt, Mumbai
This summit offers guidance on turning today's improvements in IT infrastructure and process efficiency into tomorrow's business advantage.
Website: technology/summits/apac/data-center-india/

WHD India
May 27 - 28, 2013; Convention Centre, Renaissance Hotel, Mumbai
WorldHostingDays is the world's largest series of hosting events. WHD India attendees will experience the latest products and solutions, while networking with other experts from the hosting and cloud industry.
Contact: Elisabet Portavella, Marketing Assistant; Tel: +49 221 65008 155; Fax: +49 221 65008165

Cloud Connect
June 12 - 13, 2013; Mumbai
The Cloud Connect conference and expo brings the entire cloud ecosystem together to drive growth and innovation in cloud computing.
Contact: Sanket Karode, Dy Marketing Manager; Ph: +91 22 61727403

6th Tech BFSI 2013 - Transform, Empower & Innovate
June 12 - 13, 2013, The Westin, Mumbai, Garden City; June 21, 2013, The Leela Palace, Bengaluru
Tech BFSI will provide in-depth insights into the latest developments and technical solutions in the areas of analytics, business intelligence, mobile technologies, cloud computing, data centres, collaboration technologies, virtualisation, security and IT management in the BFSI sector.
Contact: Aboli Pawar, Associate Director; Ph: 9004958990 / 9833226990
Website: http://www.techbfsi.com/

The Global High on Cloud Summit
June 12 - 14, 2013; Mumbai
'The Global High on Cloud Summit' will address the issues, concerns, latest trends, new technology and upcoming innovations on the cloud platform.
Contact: Prashanth Nair, Sr Conference Producer; Ph: +91 80 41154921

Fleming Gulf's 2nd Annual Cloud Computing Summit
August 21 - 23, 2013; New Delhi
The second annual cloud computing summit is bringing back key CIOs onto a single platform in order to overcome the general inertia plaguing the sector.
Contact: Tikenderjit Singh Makkar, Marketing Manager; Ph: +91 20 6727 6403
Website: http://www.conferenceview/2nd-Annual-Cloud-ComputingSummit/464

Open Source India
Nov 11 - 13, 2013; NIMHANS Convention Center, Bengaluru
This is the premier open source conference in Asia, targeted at nurturing and promoting the open source ecosystem in the subcontinent.
Contact: Atul Goel, Sr Product & Marketing Manager; Ph: 880 009 4211
Website: osidays/

Interop, Mumbai
Nov 27 - 29, 2013; Bombay Exhibition Center, Mumbai
INTEROP Mumbai is an independently organised conference and exhibition designed to empower information technology professionals to make smart business decisions.
Contact: Sanket Karode, Dy Marketing Manager; Ph: +91 22 61727403

Mainstream Enterprise Adoption of Open Source Databases
A conversation with Ed Boyajian, EnterpriseDB CEO

Are enterprises embracing open source database software today?
Absolutely. In 2012 we counted 32 of the Fortune 500 and 47 of the Global 1000 as customers. That includes some of the biggest IT users in the world. IT operations at the Federal Aviation Administration, NIC, Fujitsu, Sony Ericsson and TCS are all using Postgres or Postgres Plus from EnterpriseDB. Interestingly, and also noteworthy, companies like VMware, Microsoft (through its acquisition of Skype), Apple and Facebook (through its acquisition of Instagram) are using PostgreSQL. We are at the beginning of an explosion.

Companies are finding that for a fraction of the cost of traditional databases, PostgreSQL can deliver the sophisticated features and capabilities they require. PostgreSQL has had decades of hardening and development by a talented and committed community of developers, as well as a fast-growing, supportive ecosystem of database specialists.

How difficult is it to migrate to a new database?
EnterpriseDB has developed a proven Oracle compatibility solution that enables our customers to run many Oracle applications using Postgres Plus. Postgres Plus natively supports many of Oracle's system interfaces, facilitating migrations with minimal cost, risk and disruption. Existing technical staff, from developers to DBAs to operations teams, leverage existing Oracle skills to build and manage Postgres Plus databases. EnterpriseDB also has developed a comprehensive migration program that begins with an Oracle migration assessment and provides support and assistance with the process all the way through deployment.

What happens after Postgres databases are deployed?
Regardless of whether an organization is deploying applications based on community PostgreSQL or Postgres Plus, EnterpriseDB provides a portfolio of solutions that ensure success. We have made the long-term commitment to meeting the demands of the enterprise with Postgres-specialized products, support, and services. What's more, we are continually developing new Postgres database enhancements and sponsoring the efforts of the PostgreSQL community. More than 2,000 organizations around the world turn to EnterpriseDB for Postgres-related products and services.

How do customers contact EnterpriseDB?
Customers can check our web site for a wide array of information on our products and services, call +91 20 3058 9500 or e-mail us with questions or comments.

Contact us today about: Software Subscriptions, 24x7x365 Technical Support, Migration Assessments, Professional Services, Training for Administrators and Developers.
Call: +1 781-357-3390 or 1-877-377-4352 (US Only)

EnterpriseDB Software India Private Limited
Unit # 3, Ground Floor, Godrej Castlemaine, Sassoon Road, Pune 411001
T +91 20 3058 9500 F +91 20 3058 9502

Test, develop and deploy your application on VMware vCloud powered cloud. Avail free cloud credit worth ₹ 25,000*, visit for more details. Hurry! Offer expires September 30, 2012.