
2013 IEEE Seventh International Symposium on Service-Oriented System Engineering

iScreen: A Merged Screen of Local System with Remote Applications in a Mobile Cloud Environment
Jianxin Li, Qi Song, Weiren Yu, Chunming Hu, Jian Kang
School of Computer Science and Engineering, Beihang University, Beijing 100191, China
{lijx, songqi, yuwr, kangjian, hucm}@act.buaa.edu.cn
Abstract—With the convergence of cloud computing and mobile computing, mobile devices can access remote applications in a cloud environment. However, existing research has mostly focused on leveraging cloud capabilities to enhance mobile clients. In particular, accessing different cloud platforms and applications generally requires specific clients, such as a Web portal or Remote Desktop, which change the original display and interaction experience of the client's local system. This paper presents an approach named iScreen, which keeps a consistent display and interaction experience between local and remote applications in a mobile cloud computing environment. iScreen consists of a three-factor merging framework for applications, comprising window merging, meta-info merging, and interaction merging. We developed a prototype of iScreen for Windows applications that allows thin clients to seamlessly access remote cloud Windows applications. Experimental studies show that iScreen can effectively merge the local desktop with the remote display, and that mobile clients can achieve 20 frames per second when running a remote video playback application. The bandwidth usage of iScreen is about 10% lower than that of UltraVNC, and it performs especially well under high-motion scenarios.
Keywords: cloud computing, remote screen, merged screen, transmission protocol

I. INTRODUCTION

Information at your fingertips anywhere, anytime is a driving vision of mobile computing. In recent years, we have witnessed the rapid advent of mobile computing: millions of users around the world use mobile devices such as Windows Phone, iPhone, and Android to access the Internet and remote applications. Two driving technologies explain this rapid development. First, with the upgrade of network infrastructure, especially the rapid deployment of 3G networks, mobile network bandwidth has been greatly improved. Second, cloud computing platforms provide elastic management of large-scale resources, effectively supporting the execution of remote applications and the management of application data. With these developments, remote software execution based on desktop virtualization has been widely adopted due to its easy-to-use, easy-to-maintain, and cross-platform advantages. In a cloud platform, remote servers execute all applications and transmit the execution results to the client. The device acts as a remote display, capturing user input and rendering display updates received from the server. Representative systems that utilize cloud computing infrastructure include open-source VNC systems,

XenDesktop developed by Citrix, THINC developed by Columbia University [1], and Muse and CyberLiveapp developed by Beihang University [2][3]. As more software executes in a cloud, client devices will run local and remote software at the same time, and cloud-client convergence is becoming one of the phenomenal modes supported by cloud platforms. However, current systems and technologies mainly focus on extending the mass storage and computing capability of mobile devices, and on transmitting the whole desktop of virtual machines running different applications. To the best of our knowledge, there is little research on high-performance application delivery and in-depth convergence of cloud-client display that provides a high quality of experience to cloud users. We have identified two key issues as follows. First, although current research can meet the basic performance requirements of cloud-client convergence, many systems that transmit different remote desktops to end users simultaneously cannot be effectively integrated with the local client's display system. Without an effective merging mechanism, user experience can be significantly degraded when the cloud and the local client do not share similar operating systems, or when cloud service providers customize their own access terminals or portals that are independent of, rather than integrated with, the local client system. For example, many Google apps require Google Chrome OS for access; the execution environment relies solely on the browser itself and is separated from the user's operating system. Clearly, providing client devices with an integrated and interactive experience has become a key issue for cloud-client convergence. Second, as the core connection channel, the remote interactivity protocol needs to deal with interaction events and dynamic display updates. However, current systems directly rely on remote desktop systems designed for traditional LAN environments.
They have not considered the characteristics of mobile devices and mobile networks. Many remote display applications, e.g., Android VNC, cannot effectively support video playing on mobile equipment. Therefore, how an integrated view can provide high-performance interactivity between cloud and client becomes another key problem. In this paper, we propose an approach named iScreen, which merges the local system with remote applications through a three-factor display merging framework. The major contributions of this paper are three-fold: 1) We developed a cloud-platform multi-application display merging framework, called iScreen, to achieve seamless display integration. It consists of window merging, meta-info merging, and interaction merging to achieve

978-0-7695-4944-6/12 $26.00 2012 IEEE DOI 10.1109/SOSE.2013.22

seamless display integration. By retrieving the related application windows, display updates are limited to window-level updates. Application icons and taskbar icon status are synced and integrated with the local display system to achieve cloud-client application integration. 2) We developed and evaluated a prototype implementation of iScreen based on metaVNC 0.6.6 for Windows applications. The evaluation shows that, with our approach, the local desktop can be effectively merged with
the remote display, achieving 20 frames per second when running a remote video playback application. The bandwidth usage of iScreen is, on average, about 10% lower than that of UltraVNC. The rest of this paper is organized as follows. The design of iScreen is discussed in Section 2. System implementation is shown in Section 3. Evaluation results are described in Section 4. Related work is reviewed in Section 5. A brief conclusion is given in Section 6.

Figure 1 A Scenario of iScreen (an end client and a mobile phone access applications running in VMs at the cloud centre; desktop clients use the transmission protocol for cloud applications, while mobile clients use H.264 screen transmission)

II. DESIGN OF ISCREEN

2.1 System Architecture of iScreen
The main function of iScreen is to achieve display and interaction merging. As shown in Figure 1, different applications running in the cloud are transmitted to the client. To achieve a better user experience, a three-factor merging framework consisting of App Windows Merging, App MetaInfo Merging, and App Interaction Merging is used to merge the local OS with the cloud desktop.
Furthermore, screen transmission with the H.264 protocol is used for the mobile client. Figure 2 shows the system architecture of iScreen. The iScreen server captures and encodes the window and icon of an application. It also receives and decodes control messages from a client and performs the corresponding operations. The client analyses and decodes the desktop image, builds the display region, and creates icons in its local taskbar. It also captures and encodes mouse and keyboard events and sends them back to the server. To provide a better experience for clients that use multiple applications simultaneously, a display merging framework is proposed. For mobile clients, the H.264 protocol and an adaptive codec mode switch function are used for screen transmission.
2.2 Display Merging Framework for Multi-application
The display merging framework for multi-application is shown in Figure 3. It consists of App Windows Merging, App MetaInfo Merging, and App Interaction Merging. iScreen captures the whole screen display and obtains the rectangular region information for each application. On the client, the system merges the remote windows with the local windows to display a whole desktop. For meta-info merging, iScreen needs to build a real-time taskbar icon for each application and achieve the same display effect as a local one. An event listener obtains keyboard and mouse operations to determine whether to transmit them or to switch the input method.
• App Window Merging Approach
App window merging is the key part of iScreen. Intuitively, window merging can be done by independently

Figure 2 System Architecture of iScreen (server: screen capturer, window region capturer, screen encoder and sender, user-input transformer; client: screen receiver and decoder, window presenter, window region receiver, user-input transformer; the codec buffer holds the previous screen frame)


capturing each application's running window in the operating system and sending them to a client for display. However, this is infeasible on operating systems after Windows Vista, which do not provide an interface for real-time window capture. To address this challenge, iScreen captures the whole desktop display of the server and acquires the information

of the regions that the server application windows cover. To reduce compression time and output data volume, the server wallpaper is made black. Both the whole desktop display and the region information are transmitted to the client. The client uses this information to build a window with the same appearance as the server's but with a transparent wallpaper.
Figure 3 Display Merging and Codec Mode Switch Framework (client and server exchange updates, window attribute messages, and desktop/window range messages over a TCP connection; a mode switcher selects between the VNC and H.264 decoders based on frame analysis, and meta-info and window mergers are fed from the frame buffers)

When the resolution of the server differs from that of the client, the display changes drastically. For example, if the resolution of the server is 1280*1024 and the resolution of the client is 1280*800, the taskbar is covered by the application window, as shown in Figure 4, so the taskbar cannot be clicked to switch among different windows. Conversely, when the resolution of the server is 800*600 and the resolution of the client is 1280*800, a drag operation at the client may move the application window beyond the server's desktop area, as shown in Figure 5. In both cases, the client sends its resolution to the server, and the server adjusts its resolution accordingly.
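The adjustment decision above can be sketched as a tiny helper. This is an illustrative model only; `Resolution`, `needsAdjustment`, and `pickServerResolution` are hypothetical names, not iScreen's actual interface.

```cpp
// Hypothetical helper: decide what resolution the server should switch to
// when a client connects. iScreen simply adopts the client's resolution,
// so the server desktop never exceeds or underfills the client screen.
struct Resolution { int width; int height; };

inline bool needsAdjustment(const Resolution& server, const Resolution& client) {
    // Any mismatch causes either taskbar occlusion (server larger)
    // or windows dragged beyond the server desktop (server smaller).
    return server.width != client.width || server.height != client.height;
}

inline Resolution pickServerResolution(const Resolution& server, const Resolution& client) {
    return needsAdjustment(server, client) ? client : server;
}
```

With the two example resolutions from the text, a 1280*1024 server facing a 1280*800 client would switch to 1280*800, and an 800*600 server would switch up to 1280*800.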

Figure 4 Display Loss When the Server Resolution Is Larger than the Client's

Figure 5 Window Exceeding the Desktop When the Server Resolution Is Smaller than the Client's

• Meta-info Merging Approach
iScreen builds a real-time taskbar icon for each window to provide the same experience as a local client. iScreen uses two methods to extract the application icon. First, it uses the window handle directly: while traversing desktop application windows for region information, the meta-data of the applications is retrieved, and through the window handles the window titles, taskbar status, and application icons can be accessed. Second, when the first method cannot provide the necessary information, iScreen obtains the window's process ID and from it the application directory, in order to access the icon file.
• Interaction Merging Approach
Interaction integration refers to the integration of mouse and keyboard operations. Our approach covers the transparent non-window area, mouse/keyboard event judgment, and the input method. The transparent non-window area and mouse event judgment both concern the window regions. As described in the previous section, the server obtains and transmits the region information of the desktop windows, and the client builds a new window according to these regions. For the judgment of mouse events, the client checks whether a mouse event occurs within a server window region; if so, the client sends the mouse event to the server. Keyboard events are sent depending on the current focus of the client windows. Input method integration assists users in switching the input method to the local one. Shortcut key combinations often conflict between the server and client sides, so a semi-transparent button is created for this purpose. Once the button is clicked, a message is sent to the server, which executes the switch operation.
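The mouse-event judgment described above can be sketched as follows. This is a simplified model; `Rect`, `Target`, and `routeMouseEvent` are illustrative names, not iScreen's API.

```cpp
#include <vector>

// A click is forwarded to the cloud server only if it falls inside one of
// the remote application window rectangles; otherwise it is handled locally.
struct Rect { int left, top, right, bottom; };

enum class Target { Local, Remote };

inline bool contains(const Rect& r, int x, int y) {
    return x >= r.left && x < r.right && y >= r.top && y < r.bottom;
}

inline Target routeMouseEvent(const std::vector<Rect>& remoteWindows, int x, int y) {
    for (const Rect& r : remoteWindows)
        if (contains(r, x, y)) return Target::Remote;  // send to server
    return Target::Local;                              // handle locally
}
```

The same membership test is what makes the non-window area effectively transparent: events outside every remote rectangle never leave the client.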


III. SYSTEM IMPLEMENTATION

3.1 Implementation of iScreen
• Windows Merging
Windows has a Region class that provides AddRect, Combine, and SubtractRect methods to build an irregular region from rectangular areas. The region covered by all windows on the desktop is treated as the transparent region, and some communication functions are defined.

7: void UpdateWindowIcon(HWND hWnd); //update window logo

(a)The screen display in the server before the connection

(b)The screen display in the client before the connection

The function SendWindowState transforms the parameters and packages them into the data structure defined by rfbWindowStateMsg; SendQueued is then used to transmit the data. SendAllWindowStates traverses the windows and calls GetWindowName to get the name and length of each window. Using these as parameters, SendWindowState is called to transmit the name and state. Figures 7(a) and 7(b) show the taskbar state before the connection is built. The server then sends the name and other messages so that the client can build its own taskbar; the effect is shown in Figure 7(c). As the identification message is sent after the frame buffer message, the establishment process incurs a delay equal to one frame update.

a : server taskbar before the connection

(c)The screen display in the server after the connection

(d)The screen display in the client after the connection

Figure 6 The Implementation Effect Of Window Merging

The functions are listed as follows:

1: void AddRect(const RECT &rect); //build a region
2: void SubtractRect(const RECT &rect); //delete a region
3: void Combine(const vncRegion &rgn); //combine two regions
4: void Subtract(const vncRegion &rgn); //delete some part of a region
5: BOOL GetBoundary(RECT &rect); //get the bounding rectangle
6: BOOL SendTpIncRgn(vncRegion& rgn); //send the added area
7: BOOL SendTpDecRgn(vncRegion& rgn); //send the removed area
8: void UpdateTpRgn(const vncRegion& incRgn, const vncRegion& decRgn); //update the transparent region
9: void ClearTpRgn(); //clear up

b: client taskbar before connection

To adjust the display resolution, the system extends the rfbClientInitMsg structure of VNC with the width and height of the client's screen. The server changes its resolution after the connection has been built, and restores the original resolution when the connection is closed.
• Meta-info Merging
There are three aspects to this implementation: name, taskbar state, and application icon. They share a common procedure, traversing the windows using the EnumWindows function. The functions needed are listed as follows:
1: BOOL SendWindowState(ULONG id, int state, const char *windowname, int namelen); //send the state message
2: virtual BOOL GetWindowName(HWND hWnd, char *name, int *len); //get the window name
3: virtual BOOL CreateWindowIcon(HDC hMemDC, HWND hWnd, rfbServerDataIconPrefReq iconPref, rfbServerDataIcon **iconData, ULONG *iconDataLength); //build the logo information
4: BOOL SendServerIconData(ULONG id, int dataType, rfbServerDataIcon *iconImage, ULONG iconDataLength); //send logo message
5: void SendAllWindowStates(); //send state and logo messages
6: void UpdateWindowState(HWND hWnd, ULONG state); //update window state
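As a hedged sketch of how SendWindowState might package its parameters before SendQueued transmits them, the following assumes a simple layout of id, state, name length, and name bytes. The actual rfbWindowStateMsg wire format is defined by the extended RFB protocol and may differ; `packWindowState` is an illustrative name.

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Serialize a window-state message: 4-byte id, 4-byte state, 4-byte name
// length, then the name bytes. (Assumed layout, for illustration only.)
std::vector<uint8_t> packWindowState(uint32_t id, int32_t state, const std::string& name) {
    std::vector<uint8_t> msg(sizeof(uint32_t) * 3 + name.size());
    uint32_t len = static_cast<uint32_t>(name.size());
    std::memcpy(msg.data(), &id, 4);
    std::memcpy(msg.data() + 4, &state, 4);
    std::memcpy(msg.data() + 8, &len, 4);
    std::memcpy(msg.data() + 12, name.data(), name.size());
    return msg;
}
```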

c: client taskbar after the connection
Figure 7 The Implementation Effect of Logo Merging

• Interaction Merging
For interaction integration, the server obtains the regions of application windows and encapsulates them in an independent data structure, rfbTransRectHeader. When the client receives this message, it calls ReadTransRect to decode the message and restore the region information. The client builds the server region from the rectangles into the variable m_hTransRgn and calls SetWindowRgn to set the client application window region. iScreen uses HRGN rather than RECT to represent irregular shapes. To implement input method switching, iScreen uses a shortcut key to perform the operation. rfbWindowControlMsg adds a new value named rfbWindowControlChangeIME, and the client sends an rfbWindowControlMsg after it captures the operation.
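The client-side decoding step can be illustrated with a simplified stand-in for ReadTransRect. The message layout shown (a count followed by rectangle tuples) is an assumption for illustration; the real rfbTransRectHeader format may differ.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Decode a window-region message: a 4-byte rectangle count followed by
// (left, top, right, bottom) int32 tuples. Truncated messages are ignored.
struct TransRect { int32_t left, top, right, bottom; };

std::vector<TransRect> readTransRects(const uint8_t* buf, size_t len) {
    std::vector<TransRect> rects;
    if (len < 4) return rects;
    uint32_t count;
    std::memcpy(&count, buf, 4);
    size_t need = 4 + static_cast<size_t>(count) * sizeof(TransRect);
    if (len < need) return rects;  // truncated message: ignore
    rects.resize(count);
    std::memcpy(rects.data(), buf + 4, count * sizeof(TransRect));
    return rects;
}
```

The decoded rectangles would then be combined into a single HRGN and applied with SetWindowRgn, as described above.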


The server analyzes the message and determines whether the switch should happen.
3.2 Implementation of a High-Performance Transmission Protocol for the Mobile End
The server implementation was introduced in Muse [2]; here we only describe the H.264 decoder in the Android client. The whole decode-and-display process is as follows:
Step 1: The client determines whether to choose the H.264 decoder based on codecMode in rfbUpdate, which is newly defined. If codecMode is 1, it reads length into h264DataLen as the length of the H.264 bit stream. Based on this length, the whole H.264 bit stream is read into SockBuf.
Step 2: After obtaining the whole bit stream, the client divides the frame data into NAL units and sends the units to the decoder. The division is based on the NAL start code (0x00000001). The divider reads 2048 bytes at a time into sockbuf, then uses MergeBuffer() to locate 0x00000001 and starts the division. DecoderNal decodes each NAL unit, and copyPixelFromBuffer transfers the data to a Bitmap.
Step 3: The client uses setImageBitmap to load the Bitmap into an ImageView for display.
Considering RFB version differences and compatibility, the server needs to determine whether the client supports this function; we define the new version as 3.9. After initialization, the client reads the frameBufferUpdate and obtains the codecMode to choose the decoder.
IV. EXPERIMENTAL STUDY
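As a concrete illustration of the decoding pipeline evaluated in this section, the NAL-unit division from Step 2 of Section 3.2 can be sketched as follows. `splitNalUnits` is an illustrative name, and the sketch only handles the 4-byte start code 0x00000001 mentioned in the text (real H.264 streams may also use 3-byte start codes); the 2048-byte buffered reads and MergeBuffer are omitted.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Split an H.264 byte stream into NAL unit payloads on the 4-byte start
// code 0x00000001. Each returned unit excludes the start code itself.
std::vector<std::vector<uint8_t>> splitNalUnits(const std::vector<uint8_t>& stream) {
    std::vector<size_t> starts;
    for (size_t i = 0; i + 4 <= stream.size(); ++i)
        if (stream[i] == 0 && stream[i + 1] == 0 && stream[i + 2] == 0 && stream[i + 3] == 1)
            starts.push_back(i);
    std::vector<std::vector<uint8_t>> units;
    for (size_t k = 0; k < starts.size(); ++k) {
        size_t from = starts[k] + 4;  // payload begins after the start code
        size_t to = (k + 1 < starts.size()) ? starts[k + 1] : stream.size();
        units.emplace_back(stream.data() + from, stream.data() + to);
    }
    return units;
}
```

Each resulting unit would be handed to DecoderNal in turn, as described in Step 2.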

Other tools used in the experiment include the Network Emulator for Windows Toolkit, to emulate different network configurations, and AutoHotkey, to keep the operations identical across different tests. CPU and bandwidth consumption are measured by Windows Performance Monitor on Windows and by AnotherMonitor on the Android client.
4.2 The Performance of Display Merging
With window merging, the update region shrinks from the whole desktop to an independent window. Consequently, the data traffic between server and client is reduced. We compared iScreen with UltraVNC to examine the data traffic volume. A four-minute video with 640*360 resolution is played as the test sample. The result is shown in Figure 9.

Figure 9 Bandwidth usage by iScreen and UltraVNC

In this section, we present our evaluation results. We examine four aspects of iScreen: (i) the performance of window merging, (ii) bandwidth consumption after the resolution adjustment, (iii) the delay of the input method switch in interaction integration, and (iv) the server CPU, client CPU, and bandwidth consumption of the Muse Android client under three different scenarios.
4.1 Experiment Environment

As shown in Figure 9, during the video playback iScreen consumes much less bandwidth than UltraVNC. In the beginning phase, the bandwidth occupied by iScreen is relatively higher because the update range is the whole screen. Once the display region narrows to a window, the bandwidth usage becomes significantly lower. This demonstrates that our mechanism performs better under low bandwidth.
4.3 Bandwidth Usage after the Resolution Adjustment
In real application scenarios, the display resolution of the server is usually higher than that of the client, so the bandwidth usage should decrease after the adjustment. The resolution is originally 1280*1024 and becomes 1280*800 after the adjustment; the scenario used here is a whole-screen scenario. Figure 10 shows that our system is effective: the bandwidth usage changes with the display resolution.

Figure 8 Experiment Environment Setup


In our experiment, a Windows 7 x64 PC with an Intel(R) Core(TM)2 Quad CPU Q9550 @ 2.83 GHz and 4 GB RAM is used as the server. An Android 4.0 tablet with an Exynos 4210 @ 1.4 GHz is used as client 1 to test the Muse client, and a Windows 7 Ultimate x86 laptop is used as client 2 to test iScreen. Figure 8 shows the experiment setup.

Figure 10 Bandwidth Usage Experiment of Resolution Adjustment (1280*1024 vs. 1280*800)


4.4 Delay Caused by Input Method Switch
The delay from a key press to the input method being displayed on the screen consists of three components: the keyboard reaction time, the response time, and the transmission time of the control signal and the display update. The response time is so small that it can be ignored. Figure 11 shows the total delay, which can be calculated as T = T1 + T21 + T22. We use KeyboardTest to measure the keyboard delay T1, and a similar approximation to measure T21 and T22. Since we can obtain the size of the input method window, the `ping -l` command can be used to simulate the transmission delay. The data size is 8 KB, and T2 is about 102 ms.

almost static during playback, so its frame rate and bandwidth usage are very small but the experience is bad. The results under 54 Mbps are shown in Figure 13.
• Server CPU Usage
The average server CPU usage is shown in Table 1. Low network bandwidth decreases the CPU usage, since lower data transmission increases the update intervals. MobileiScreen uses more CPU than Android VNC because the H.264 codec needs more computing resources, which indicates that MobileiScreen is suited to servers with ample CPU resources.
Table 1 Server CPU usage results

          Text editing              Browsing                  Video playing
          MobileiScreen  AndroidVNC  MobileiScreen  AndroidVNC  MobileiScreen  AndroidVNC
54Mbps    46.07%         25.08%      50.51%         27.76%      54.79%         27.99%
1Mbps     43.62%         21.71%      51.63%         24.19%      53.61%         25.61%

Figure 11 Delay Caused by Input Method Switch (keyboard reaction delay plus data transmission delay between client and server)

Figure 12 shows the delay test result. With a small update area, the network delay accounts for 37.34% of the total delay, which is within the normal range (with T2 ≈ 102 ms, this implies a total delay of roughly 270 ms). This result shows that the input method switch is feasible.

• Client CPU Usage
The average client CPU usage is shown in Table 2. As the display intensiveness of the server screen increases, more computing resources are needed to decode the data; consequently, client CPU usage is higher in high-motion scenes for both MobileiScreen and Android VNC. Under the low-bandwidth environment, bandwidth has little effect on the CPU usage of either system. Overall, MobileiScreen takes less client CPU and is thus more suitable for mobile clients.
Table 2 Client CPU usage results

          Text editing              Browsing                  Video playing
          MobileiScreen  AndroidVNC  MobileiScreen  AndroidVNC  MobileiScreen  AndroidVNC
54Mbps    32.26%         46.11%      36.76%         36.69%      31.02%         44.29%
1Mbps     30.92%         42.65%      39.23%         37.68%      38.33%         44.36%

Figure 12 Total Delay Time

4.5 Performance Comparison under Different Application Scenarios
We compare iScreen and AndroidVNC (Build 203) with a server resolution of 1024*768. The experiment involves three test groups, each run under both 54 Mbps and 1 Mbps network environments:
1) Text editing: the test time is 60 s, with some input and scroll operations.
2) Browsing: the test time is 60 s, with some scroll operations using Google Chrome in the client.
3) Video playing: the test time is 64 s, with a full-screen video playing.
Note that since AndroidVNC cannot support video playing, the screen of AndroidVNC is

• Bandwidth Usage
The average bandwidth usage is shown in Table 3. In high-motion scenarios, as the input image complexity increases, the frame rate decreases, and MobileiScreen uses less bandwidth to transmit data; this is the main advantage brought by the H.264 codec. In low-motion scenarios, the bandwidth usage of MobileiScreen is also often lower than that of AndroidVNC. MobileiScreen is therefore more suitable for low-bandwidth network environments. One phenomenon should be noted here: when playing video, the display of AndroidVNC is almost stopped, so its bandwidth usage is very low.
Table 3 Bandwidth usage results

          Text editing              Browsing                  Video playing
          MobileiScreen  AndroidVNC  MobileiScreen  AndroidVNC  MobileiScreen  AndroidVNC
54Mbps    7580           27088       13675          36482       58485          21435
1Mbps     2307           1766        2837           13882       14052          12977


• Display Quality Test
We use the number of frames processed by the client per minute to represent the display quality, and the experiment uses the same three test groups. The results are shown in Figure 14. In high-motion scenarios, the frame rate decreases since the higher complexity increases the decode

time. AndroidVNC performs well in low-motion scenes but very badly in high-motion scenes; clearly, MobileiScreen is more applicable to high-motion scenes. MobileiScreen also adapts better to low-bandwidth scenarios, showing only a small change in frame rate, and outperforms AndroidVNC in low-bandwidth networks.

Figure 13 Performance Comparison Test under Different Application Scenarios ((a)-(c) server CPU usage, (d)-(f) bandwidth usage, (g)-(i) client CPU usage)

Figure 14 Display Quality Test Results

V. RELATED WORK

VNC and THINC are two representative remote interactive systems developed by researchers in academia. In industry, there are also Microsoft Remote Desktop, Citrix XenDesktop, VMware View, Sun Ray, and HP Remote Graphics, and several different interaction protocols have been proposed. Research on remote interactive systems is mainly carried out in the following three aspects:
• Remote View Protocols
Representative projects on desktop virtualization include THINC [4], Citrix XenDesktop [14][15], Microsoft


Terminal Service [14], and several VNC systems [16-18]. THINC is a remote display system architecture for high-performance thin-client computing in both LAN and WAN environments. THINC enables higher-level graphics primitives used by applications to be transparently mapped to a few simple low-level primitives that can be implemented easily and efficiently. Citrix provides full VDC (Virtual Desktop Computing) using its ICA protocol in parallel with the Ardence image and provisioning manager and a desktop server hypervisor. Recently, XenClient has extended the benefits of desktop virtualization to mobile users, offering improved control for IT with increased flexibility for users. RDP enhancements in Windows Server 2008 and in recent Microsoft client operating systems also address some of the problems identified with video and other graphics-intensive applications over RDP. VNC is based on the RFB protocol, a simple and powerful remote display protocol. Unlike other remote display protocols such as the X Window System and Citrix's ICA, the VNC protocol is totally independent of the operating system, windowing system, and applications. RealVNC [17] proposes different remote display solutions for client access: the software is executed at remote servers, and the user client just receives the presented desktop. This solution only focuses on the separation of execution and presentation, and does not involve software deployment and execution. MetaVNC [16] pursues a remote desktop environment in which users can control applications on different hosts seamlessly. MetaVNC is a window-aware VNC that merges windows of multiple remote desktops into a single desktop screen.
• Encoding Methods
As the core of a remote interaction system, the codec method directly affects the delivery performance and user experience. An efficient codec can guarantee low-latency, low-bandwidth delivery. Several codecs currently in use are as follows.
Drawing instructions: instruction-based encoding is represented by the THINC [4] and RDP protocols. It handles user feedback and sends the display results to the client as drawing primitives; the client reconstructs and displays the images according to these primitives. Instruction-based encoding requires appropriate hardware and a virtual display driver on the client, and for complex applications a large amount of drawing data must be transferred, which can exceed the computing capability of mobile devices.
Static image compression: this method transfers each frame to the client using image compression; the client decompresses and displays the compressed images. An improved image compression method is proposed in [5]: it divides the entire screen into blocks of 16*16 pixels, which are classified by pixel detection into skip blocks (not encoded), image blocks (image coding), and text blocks (text encoding). The client then restores all the blocks into a complete image. This method saves bandwidth effectively.
Video encoding: static image compression cannot adapt well to highly dynamic application scenarios, so video encoding methods have been proposed. Video codecs include MPEG codecs [6], [7], [12] and the H.264 [8]

codec. MPEG codecs are simple and offer good quality, but occupy larger bandwidth because of their low compression ratio. In contrast, the H.264 codec saves bandwidth but requires more codec capability, resulting in higher hardware requirements on mobile devices.
Hybrid encoding: video encoding can offer a better user experience but may bring large computational complexity. Thus, hybrid coding methods have been proposed for the varying characteristics of application scenarios [9][10][11]. A hybrid coding method identifies high-motion and low-motion areas according to the amount of pixel change in each region and applies video encoding or static image encoding respectively; the client restores the differently coded regions into a complete image. Besides detecting pixel changes, a CRC check of the pixel data can be used to determine whether a region has changed [13]. Hybrid encoding can switch between encoding methods automatically, saving system resources and enhancing user experience.
• System Structure Optimization
Research has also been conducted on the system structure; precomputed display updates [18] and scene object caches provide effective ways to reduce transmission delay. With precomputed display updates, given the current state of the application, the application server can predict potential display updates and pre-send them to the client, or use pre-request technology to request regular updates on the basis of traditional VNC [19]. Different networks and applications can determine different pre-request intervals, and the interaction delay can also be reduced. For many static applications, such as office applications, the size of the update area on the screen is very limited, so the number of server updates is also limited.
Scene description languages similar to MPEG-4 BIFS are especially suitable for client-side processing that supports user input [20]. The client not only receives image updates but also knows the scene structure and the objects it describes, as if the user were directly manipulating these objects. In [21], the ClassX system is extended to deliver high-quality interactive video to the smartphone's multi-touch screen.

VI. CONCLUSION AND FUTURE WORK

With the development of desktop virtualization techniques, more and more users access virtual desktops from mobile devices. However, the integration between remote and local systems is not yet effective, and the characteristics of mobile scenarios have not been well taken into consideration by current techniques. To address these challenges, we design and implement a flexible merging system named iScreen with three major factors, window merging, meta-info merging and interaction merging, to achieve seamless display integration. The presentation mode of the remote desktop in a desktop virtualization environment is changed from whole-screen display to window-based display, which eases the delivery and control of customized software for application providers. The proposed H.264-based encoding system, MobileiScreen,


is developed with an adaptive codec mode switch. Experimental studies show that our system achieves better performance than existing systems such as UltraVNC and AndroidVNC. Our future work includes codec optimization, especially hardware acceleration; network optimization with multi-channel transmission; and mobile network performance optimization, so as to achieve a much better user experience. Another direction is multi-tenant support for architectural improvement of the system.

ACKNOWLEDGMENTS

We would like to thank Yan Bai of the University of Washington Tacoma for her suggestions and help. This work is partially supported by the National Natural Science Foundation of China (No. 61170294, 61272165), the National Key Technology R&D Program (No. 2012BAH46B04), the Program for New Century Excellent Talents in University 2010, and the Beijing New-Star R&D Program (No. 2010B010).

REFERENCES
[1] Ricardo Baratto, Leonard Kim, and Jason Nieh. THINC: A Virtual Display Architecture for Thin-Client Computing. Proceedings of the 20th ACM Symposium on Operating Systems Principles (SOSP), October 2005.
[2] Weiren Yu, Jianxin Li, Chunming Hu, and Liang Zhong. Muse: a multimedia streaming enabled remote interactivity system for mobile devices. Proceedings of the 10th International Conference on Mobile and Ubiquitous Multimedia (MUM), 2011: 216-225.
[3] Jianxin Li, Yu Jia, Lu Liu, and Tianyu Wo. CyberLiveApp: A secure sharing and migration approach for live virtual desktop applications in a cloud environment. Future Generation Computer Systems 29(1): 330-340 (2013).
[4] Ricardo Baratto, Leonard Kim, and Jason Nieh. THINC: A Virtual Display Architecture for Thin-Client Computing. Proceedings of the 20th ACM Symposium on Operating Systems Principles (SOSP), October 2005.
[5] Huifeng Shen, Yan Lu, Feng Wu, and Shipeng Li. A High-Performance Remote Computing Platform. Proceedings of the 2009 IEEE International Conference on Pervasive Computing and Communications, IEEE Computer Society, 2009: 1-6.
[6] F. Lamberti and A. Sanna. A Streaming-Based Solution for Remote Visualization of 3D Graphics on Mobile Devices. IEEE Transactions on Visualization and Computer Graphics 13(2), 2007: 246-260.
[7] I. Nave et al. Games@Large Graphics Streaming Architecture. Proc. IEEE Int'l Symp. Consumer Electronics (ISCE 08), IEEE Press, 2008: 205-208.
[8] D. De Winter, P. Simoens, L. Deboosere, F. De Turck, J. Moreau, B. Dhoedt, and P. Demeester. A Hybrid Thin-Client Protocol for Multimedia Streaming and Interactive Gaming Applications. Proceedings of the International Workshop on Network and Operating Systems Support for Digital Audio and Video (NOSSDAV), 2006.
[9] Sheng Liang. Java Native Interface: Programmer's Guide and Specification. Prentice Hall, 1st edition, June 1999.
[10] T. Imai, K. Horio, T. Ohno, and K. Matsui. An adaptive desktop transfer protocol for mobile thin client. IEEE GLOBECOM Workshops, Dec. 2010.
[11] P. Simoens, P. Praet, B. Vankeirsbilck, J. De Wachter, L. Deboosere, F. De Turck, B. Dhoedt, and P. Demeester. Design and implementation of a hybrid remote display protocol to optimize multimedia experience on thin client devices. IEEE ATNAC, Dec. 2008.
[12] B. Joveski, M. Mitrea, and F. Preteux. MPEG-4 LASeR-based thin client remote viewer. 2nd European Workshop on Visual Information Processing (EUVIP), Jul. 2010: 125-128.
[13] Kyungtae Han, Zhen Fang, Paul Diefenbaugh, Richard Forand, Ravi R. Iyer, and Donald Newell. Using Checksum to Reduce Power Consumption of Display Systems for Low-Motion Content. IEEE International Conference on Computer Design, 2009: 47-53.
[14] T. W. Mathers and S. P. Genoway. Windows NT Thin Client Solutions: Implementing Terminal Server and Citrix MetaFrame. Macmillan Technical Publishing, Indianapolis, IN, Nov. 1998.
[15] Citrix Systems - Virtualization, Networking and Cloud. http://www.citrix.com/
[16] MetaVNC, a part of the Collective at Stanford University. http://metavnc.sourceforge.net/
[17] RealVNC, VNC remote control software. http://www.realvnc.com/
[18] Taurin Tan-atichat and Joseph Pasquale. VNC in High-Latency Environments and Techniques for Improvement. IEEE Global Telecommunications Conference, Dec. 2010: 1-5.
[19] John R. Lange, Peter A. Dinda, and Samuel Rossoff. Experiences with Client-based Speculative Remote Display. USENIX Annual Technical Conference, 2008.
[20] M. Mitrea et al. BiFS-Based Approaches to Remote Display for Mobile Thin Clients. Proc. SPIE, vol. 7444, 2009, p. 74440F, doi:10.1117/12.828152.
[21] Derek Pang, Sherif Halawa, Ngai-Man Cheung, and Bernd Girod. ClassX Mobile: region-of-interest video streaming to mobile devices with multi-touch interaction. Proceedings of the 19th ACM International Conference on Multimedia (MM '11), ACM, New York, NY.

