Hardware emulators are software implementations of (past) computer architectures that reproduce original environments for a wide range of purposes. Most emulators and virtual machines are designed for local access, meaning they are installed on the machine the user is working on. The screen output is redirected into a window or onto the whole screen of the host system, while input devices like mouse and keyboard are redirected to the emulator upon focus. This is convenient if the emulator is available for the host platform the user would like to use. To actually run an original environment within an emulator, however, a wide range of dependencies usually has to be met and specific knowledge is required. Moreover, there are several reasons why a machine may have to be accessed remotely. The concept of remote keyboard, video and mouse (KVM) has been known for more than 30 years of computer center operation, originally developed for remote server access. As hardware solutions are often limited, software solutions have been available for a while, the most prominent and most widely deployed being VNC. But, partially because of its roots and original purpose, VNC has a couple of limitations: restricted video capability, audio not being considered, and no support for input devices other than mice and keyboards.
Remote access to emulation (services) has been discussed and researched for a while. With new ideas arising and new services such as streaming and platform/software as a service in the cloud becoming widely available, it is time to revisit the topic of remote emulation. After its research into migration-through-emulation services, the bwFLA project brings the topic of (automated) long-term access, which started with GRATE and GRATE-R back during PLANETS, into focus again.
The remote access to (virtual) machines should be revisited in light of current developments in information technology. More and more devices are becoming mobile, but not necessarily more powerful compared, e.g., to the advances of desktop and laptop computers over the last 25 years. They are limited by battery technology and the requirements on their physical dimensions. Tablet devices and smart phones thus do not offer desktop-class CPU power, RAM or hard disk capacity and typically run different operating systems than their big counterparts. Nevertheless, more and more people like to access today's and past systems via their new mobile devices. In the long run, those devices get replaced too and become obsolete after a while. Mobile and embedded devices are the next generation of computer architectures that require preservation as artefacts of our history. Thus, suitable methods are needed to make them accessible over longer periods of time. In contrast to their desktop counterparts, they usually have different means of input. The touch screen and an on-screen keyboard roughly translate to the traditional mouse plus keyboard model. But new types of input found on mobile devices and game consoles, like GPS, gyroscope or position sensors, have no counterpart in that model and require dedicated handling.
Abstracting Remote Access
Remote access to emulated renderings of original environments will most certainly become a crucial factor for emulation-based presentations of artefacts. There are several reasons to look into generic machine and device remote access:
- The setup and maintenance of emulation-backed original environments requires software components and knowledge that are usually not available to the average user (e.g. downloading and configuring packages of emulators and original environments)
- The gap between current and deprecated technology widens, thus requiring a very generic API to remain compatible over longer periods of time (VNC is a good example for at least a subset of the required functionality)
- The number and variety of access devices will grow: traditional desktop systems will be complemented by mobile devices or even systems like smart TVs
- The cloud paradigm pushes new types of devices and services: software as a service gains relevance and many complex tasks are shifted from the relatively weak end system to the much more powerful cloud (offloading)
- Access can be offered to a wide range of different systems, ranging e.g. from old home computers, Unix workstations, PCs and Macintoshes to the latest mobile platforms and game consoles, using mostly the same technology
- Easy suspend-resume can be offered, even with entirely different users on different platforms accessing the same artefact one after another (or at the same time)
Optimally, instead of installing a large number of software packages that are difficult to maintain even for a small number of platforms, the user should simply set up a single application or mobile app to access today's and future services, abstracting from and translating the actual capabilities of the chosen remote platform. Ideally this results in a solution that allows access to a 1985 home computer game running on an Atari ST emulator, to mid-2000s Linux, Windows or Solaris desktops, and to a Nintendo Wii or some modern 3D game through (mostly) the same interface. Of course, this application has to adapt to different input and output methods.
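As a rough illustration of what such an abstraction could look like, consider the following TypeScript sketch. All names and types are hypothetical; they do not correspond to any existing standard or product:

```typescript
// Hypothetical sketch of a platform-neutral remote access client API.
// Every name here (EnvironmentDescriptor, RemoteSession, ...) is
// illustrative only; no such standard exists yet.

/** Input channels a remote environment may accept. */
type InputCapability = "keyboard" | "mouse" | "touch" | "gamepad" | "sensor";

/** Output channels a remote environment may produce. */
type OutputCapability = "video" | "audio";

interface EnvironmentDescriptor {
  id: string;                 // e.g. "atari-st-game-1985"
  platform: string;           // e.g. "Atari ST", "Solaris 9", "Wii"
  inputs: InputCapability[];  // what the emulated system understands
  outputs: OutputCapability[];
}

interface RemoteSession {
  /** Negotiate channels/codecs and attach to the emulated machine. */
  connect(env: EnvironmentDescriptor): Promise<void>;

  /** Send a (possibly translated) input event to the remote side. */
  sendInput(channel: InputCapability, payload: unknown): void;

  /** Subscribe to decoded output data, e.g. video frames or audio. */
  onOutput(channel: OutputCapability, handler: (data: ArrayBuffer) => void): void;

  disconnect(): Promise<void>;
}
```

A client implementing such an interface could inspect the descriptor and decide locally how to present, say, gamepad controls on a touch screen, or how to handle sensor input that the accessing device does not possess.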
Separating Artefacts from Access
Remote access to virtual machines and emulation can significantly help to separate content (i.e. the artefacts) from the user accessing it. This concept is not new; it is often deployed in business environments using restricted terminal services. The architecture would simply turn the original environments and their components into a black box within the controlled environment of the service provider. Thus, no copyrighted or restricted material needs to be transferred to the user's side:
- The artefact's bitstream (including the network traffic of distributed applications) never leaves the controlled environment, just the renderings, which in turn can be controlled using recent technology (reduced quality in audio, video and 3D; watermarking)
- The components required to reproduce the original or some compatible environment for the artefact, often copyrighted or restricted by their licenses, do not need to be shipped and installed on the user's system. The KEEP emulation framework, for instance, demonstrates the legal and technical challenges faced when the original environment needs to be deployed on the user's machine.
For separated access the user just needs a generic remote access client, made available for a wide range of different architectures. This would allow memory and other content-holding institutions to offer remote access to their artefacts much more easily.
Depending on the characteristics of the digital artefacts, such as complete system images, issues of privacy become relevant. By keeping the artefact and its environment in a controlled system, there are far fewer vectors through which critical data could leak.
Translating New Types of In- and Output
While the IT world had been rather static regarding input and output channels, a couple of new developments have changed the landscape significantly over the last few years. For the administration of servers it was very helpful to be able to remotely mount an optical disc drive or USB device from the administrator's desktop machine and transport the data over the same channel as the KVM information.
Remote desktop access not only requires video but optimally also audio to travel back and forth. The same is true for entirely new sources of input like GPS signals, position sensors, an electronic compass or a gyroscope. Other types of input need translation if, e.g., a PC desktop is accessed from a tablet device: touch gestures have to be translated into mouse movements, and an on-screen keyboard has to be offered to use the keyboard of the original platform. The keyboard might even need translation between different layouts regarding languages and function keys. Another requirement that might arise is real-time capability of the interaction, e.g. in fast-paced electronic games. Depending on the remote system accessed, different bandwidth requirements for the in- and output channels have to be considered.
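To make the translation step concrete, the following sketch maps browser touch events onto the pointer events a remote desktop protocol typically expects. `sendPointerEvent()` and the framebuffer dimensions are placeholders for whatever the actual protocol and environment provide:

```typescript
// Sketch: translating browser touch events into the pointer events a
// remote desktop protocol expects. sendPointerEvent() is a placeholder
// for the actual protocol call (e.g. a VNC PointerEvent message).

declare function sendPointerEvent(x: number, y: number, buttons: number): void;

const REMOTE_WIDTH = 1024;   // assumed framebuffer width of the remote system
const REMOTE_HEIGHT = 768;   // assumed framebuffer height

const canvas = document.getElementById("remote-screen") as HTMLCanvasElement;

// Map client coordinates onto the remote framebuffer, compensating
// for any scaling applied on the client side.
function toRemoteCoords(t: Touch): { x: number; y: number } {
  const rect = canvas.getBoundingClientRect();
  return {
    x: (t.clientX - rect.left) * (REMOTE_WIDTH / rect.width),
    y: (t.clientY - rect.top) * (REMOTE_HEIGHT / rect.height),
  };
}

canvas.addEventListener("touchstart", (e) => {
  const { x, y } = toRemoteCoords(e.touches[0]);
  sendPointerEvent(x, y, 1);            // finger down -> left button pressed
  e.preventDefault();                   // keep the browser from scrolling
});

canvas.addEventListener("touchmove", (e) => {
  const { x, y } = toRemoteCoords(e.touches[0]);
  sendPointerEvent(x, y, 1);            // drag with the button held
  e.preventDefault();
});

canvas.addEventListener("touchend", (e) => {
  const { x, y } = toRemoteCoords(e.changedTouches[0]);
  sendPointerEvent(x, y, 0);            // finger up -> button released
  e.preventDefault();
});
```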
In addition to the traditional KVM trinity, access to a device changer (a list of available floppy, optical media or tape images) should be offered, depending on the platform emulated. Furthermore, traditional and new types of machine control such as power on/off, reset and suspend/resume should be made available through the remote control system (to a certain, controlled degree). All of this could be implemented depending on the target system the user is interacting with, e.g. as an overlay on the screen.
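A hypothetical control channel complementing the KVM streams could be as small as the following interface; all names are illustrative, not part of any existing protocol:

```typescript
// Sketch of a side channel for media changing and machine control.
// MediaSlot, PowerAction and the interface itself are hypothetical.

type MediaSlot = "floppy" | "cdrom" | "tape";
type PowerAction = "power-on" | "power-off" | "reset" | "suspend" | "resume";

interface MachineControl {
  /** List the image files the service provider offers for a slot. */
  listMedia(slot: MediaSlot): Promise<string[]>;

  /** "Insert" one of the offered images into the emulated drive. */
  changeMedia(slot: MediaSlot, imageId: string): Promise<void>;

  /** Machine state control, exposed only to the permitted degree. */
  perform(action: PowerAction): Promise<void>;
}
```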
On the user's end the client should scale screen resolutions up (and down) to match the actual device's screen size. Additionally, it should deal with different aspect ratios. It could re-use unused screen areas by placing additional functionality there; another option would be an on-screen overlay.
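The scaling itself is simple arithmetic. A sketch of an aspect-ratio-preserving fit follows; the margins it leaves over are exactly the screen areas that could host additional controls:

```typescript
// Fit a remote framebuffer into the client viewport while keeping
// the aspect ratio (letterboxing/pillarboxing).

interface Size { width: number; height: number; }

function fitToViewport(remote: Size, client: Size) {
  // Uniform scale factor: the more constrained dimension limits the fit.
  const scale = Math.min(client.width / remote.width,
                         client.height / remote.height);
  const width = Math.round(remote.width * scale);
  const height = Math.round(remote.height * scale);
  return {
    scale, width, height,
    // Centre the image; the remaining bars can host extra controls.
    offsetX: Math.floor((client.width - width) / 2),
    offsetY: Math.floor((client.height - height) / 2),
  };
}

// Example: a 640x400 Atari ST screen on a 1280x720 tablet scales by
// 1.8 to 1152x720, leaving 64-pixel bars on the left and right.
```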
Different Technologies and Implementations
There are a couple of standards like VNC, RDP or Citrix and variants thereof that allow remote access to whole systems, with varying side channels for remote audio, block devices or other means of controlling the machine. In addition, newer web technology such as HTML5 video/audio or JavaScript-based remote access applications might become relevant for the outlined purpose.
Optimally, remote video/audio access is directly available from the virtual machine or emulator, making it applicable to a wide range of different clients. Examples offering the VNC protocol are VMware, VirtualBox, QEMU and Dioscuri; VirtualBox offers RDP, too. Parallels offers a mobile access client for its desktop virtualization solutions. Unfortunately, none of this is standardized and the capabilities differ.
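QEMU is a convenient example because its built-in VNC server requires nothing inside the guest. A minimal sketch of starting such an instance from a Node.js-based service follows; the disk image path and display number are placeholders:

```typescript
// Launch a QEMU instance whose display is exported through QEMU's
// built-in VNC server; "-vnc :0" listens on TCP port 5900.
import { spawn } from "node:child_process";

const qemu = spawn("qemu-system-i386", [
  "-hda", "/images/win98.qcow2",  // placeholder disk image of the environment
  "-m", "256",                    // memory size fitting the original system
  "-vnc", ":0",                   // VNC display :0 = TCP port 5900
]);

qemu.on("exit", (code) => console.log(`emulator exited with code ${code}`));
```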
If several emulators are required to cover different hardware architectures, they should agree on a common standard/API so that the remote access client on the user's side can remain the same.
Distributed Architecture and Specialization
With the proposed standardized user interface, all components of the original artefact remain within the service provider's domain. Additionally, complex setups like networked computer games or linked business processes become possible. The network application (e.g. an online game), consisting of one or more servers and (many) clients connecting to them, can run in a controlled environment that is shielded from the outside world. This makes it possible to create and maintain certain states of the business process, database or game (time running, before/after a "major event", and so on) and to load and run them on demand simply by firing up a certain suspended machine. The clients can be cloned on demand if a new instance is required for multiple-user access. Both types of systems run "in the cloud", operated by the service provider. The (original) network connections between the clients and the server are fully virtual and not connected to the outside world. There might even be additional machines in that virtual network if required, such as license servers.
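On the service side this suggests a small orchestration layer around suspended snapshots. The following interface is a hypothetical sketch of such a layer, not an existing API:

```typescript
// Hypothetical orchestration API for prepared environment states:
// suspended snapshots are resumed or cloned on demand and wired into
// an isolated, purely virtual network.

interface Snapshot {
  id: string;                  // e.g. "game-server-before-major-event"
  role: "server" | "client";
}

interface EnvironmentOrchestrator {
  /** Resume a suspended machine in the state captured by the snapshot. */
  resume(snapshot: Snapshot): Promise<string>;  // returns a machine id

  /** Clone a client template so each user gets their own instance. */
  clone(template: Snapshot): Promise<string>;

  /** Attach a machine to an internal-only virtual network segment. */
  attachToNetwork(machineId: string, networkId: string): Promise<void>;
}
```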
Different environments might be distributed over different service providers (institutions), which can either share the load or specialize in certain environments.
The HTML5 standard is a major step towards standardizing streaming services for a wide range of clients and devices. Together with proxy tools like Guacamole, an open source project that provides an abstract interface to remote desktop protocols like VNC and RDP, the standardization of remote system access through HTML5 is advancing. The abstraction from the concrete client protocols avoids the need to take care of additional data channels like audio, remote block device access or event-recording back channels. This can both improve the security and privacy of remote access and simplify the implementation for a wide range of (mobile) clients.
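To give an impression of how little the web client has to do, the following snippet follows the usage pattern documented for guacamole-common-js; the tunnel endpoint and the display element are placeholders for a concrete deployment:

```typescript
// Browser-side sketch: the HTML5 page speaks only the Guacamole
// protocol over a single tunnel; guacd translates to VNC/RDP.
declare const Guacamole: any;  // provided by guacamole-common-js

const client = new Guacamole.Client(new Guacamole.HTTPTunnel("tunnel"));

// Render the remote display into the page and connect.
document.getElementById("display")!.appendChild(client.getDisplay().getElement());
client.connect();

// Mouse and keyboard events travel back through the same tunnel, so
// the web client does not manage any extra side channels itself.
const mouse = new Guacamole.Mouse(client.getDisplay().getElement());
mouse.onmousedown = mouse.onmouseup = mouse.onmousemove =
  (state: any) => client.sendMouseState(state);

const keyboard = new Guacamole.Keyboard(document);
keyboard.onkeydown = (keysym: number) => client.sendKeyEvent(1, keysym);
keyboard.onkeyup = (keysym: number) => client.sendKeyEvent(0, keysym);
```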