It would be impossible even to think of a picture archiving and communications system (PACS) as mission critical if the application service provider (ASP) infrastructure supporting it were not also mission critical.

ASP infrastructures today satisfy the requirements for mission criticality because their architects recognize that they are vulnerable to occasional breakage. Accordingly, in designing these infrastructures, information-technology professionals proceed not from the premise that a break is possible but, rather, from the view that such an event is inevitable.

At ComDisco, Rosemont, Ill, our response to the expectation of infrastructure disruption is to equip the pipeline along which radiology images and data flow with alternative paths, so that breakage, when it occurs, is entirely transparent to the user. Hence, in a mission-critical ASP PACS infrastructure, nothing ever goes down, at least not from the user's perspective. The only time a user is aware that things are not working is when his or her own workstation malfunctions. A failed workstation does not mean that data and applications have ceased being pushed outward from the ASP to that user; it simply means that the user needs to disconnect the bad workstation and plug a functioning one into the infrastructure, or move to another workstation, in order to be back in business immediately.
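The alternative-path idea can be sketched in a few lines. This is an illustrative fragment, not ComDisco's actual implementation: the endpoint names are hypothetical, and the fetch function is passed in so the routing logic stands alone.

```python
def fetch_with_failover(endpoints, fetch):
    """Try each path into the ASP in turn; a single broken path is
    invisible to the caller, who simply receives the data."""
    last_error = None
    for url in endpoints:
        try:
            return fetch(url)
        except OSError as err:
            last_error = err  # this path failed; fall through to the next
    raise ConnectionError(f"all paths failed: {last_error}")
```

From the user's perspective, a dead primary link and a healthy one are indistinguishable: the call returns the study either way, which is the transparency described above.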


To the information-technology architect, ASP PACS infrastructure represents much more than copper wire, fiber optics, and servers, the components that allow one to reach out and touch the Internet. It is, instead, an entire environment that supports or complements a data center (or, possibly, a fully outsourced business). It is an environment that allows the customer to manage the migration of information in a very controlled and predictable manner. It is an environment that enables the customer to minimize risk.

Central to this infrastructure environment are middleware or toolware applications designed to monitor and understand the way in which PACS and its edge devices are interfacing and working with wide-area and global-area networks. This, in essence, is a coupling of infrastructures with infostructures. This coupling permits the ASP, for example, to monitor, evaluate, and predict information utilization prospectively. Such insights make it possible for the ASP to alert the customer when it becomes evident that information is not being used optimally. By making the customer aware of this suboptimal utilization, the ASP allows the customer to educate its users so as to bring information utilization to a higher level and, thus, to set the stage for improved outcomes, better cost control, and greater efficiency of work flow.
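A toy sketch of this monitoring middleware follows. The threshold, user names, and 50%-of-expected rule are assumptions for illustration; a real tool would draw on actual network and application telemetry.

```python
from collections import Counter

class UtilizationMonitor:
    """Track how often each user or department actually retrieves the
    information pushed to it, and flag suboptimal utilization so the
    customer can be alerted (illustrative thresholds only)."""

    def __init__(self, expected_per_period=50):
        self.expected = expected_per_period
        self.accesses = Counter()

    def record_access(self, user):
        self.accesses[user] += 1

    def underutilizers(self):
        # Users retrieving well below the expected volume are candidates
        # for the customer-education step described above.
        return sorted(u for u, n in self.accesses.items()
                      if n < 0.5 * self.expected)
```

The point of the sketch is the feedback loop: the ASP observes utilization prospectively and surfaces the gap, rather than waiting for the customer to discover it.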

The infrastructure environment takes in the equipment, too. Consider the matter of servers. The ASP must be able to accommodate whatever data sets the customer wishes to transmit. As time goes by and system utilization increases, it is almost a certainty that those data sets will become larger and more complex. It is therefore imperative that the ASP build geographic load balancing into the infrastructure for peak usage and failure events: equipment and processes that provide sufficient speed and bandwidth to transport, amass, and disseminate growing data sets.
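One simple load-balancing policy, sketched here purely for illustration (the region names and the healthy/load/capacity tuple are invented), is to route each request to the healthy site with the most spare capacity, so that both peak usage and a regional failure are absorbed automatically.

```python
def pick_region(regions):
    """Select the healthy region with the most spare capacity.
    `regions` maps name -> (healthy, current_load, capacity)."""
    spare = {name: cap - load
             for name, (healthy, load, cap) in regions.items()
             if healthy and cap > load}
    if not spare:
        raise RuntimeError("no healthy region with spare capacity")
    return max(spare, key=spare.get)
```

If a region goes down, it simply drops out of the candidate set and traffic shifts elsewhere, which is the failure-event behavior the paragraph above calls for.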

Storage and archiving are other equipment-related considerations. The ASP can be considered a primary and backup service or simply a backup service. The customer may regard the service as a complement to its traditional on-site application hosting environment with immediate and near-line storage. Or it may be viewed as a stair-step service into the dissemination of information to its constituents, complemented by off-site archiving. There also are issues related to the length of time that information will be stored, and these will affect how storage is designed within the context of the infrastructure environment.
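The interplay of immediate storage, near-line storage, off-site archiving, and retention period might be expressed as a tiering rule. The cutoffs below (30 days, 1 year, 7 years) are illustrative assumptions, not any particular institution's policy.

```python
from datetime import date

def storage_tier(study_date, today, retention_years=7):
    """Illustrative tiering rule: recent studies stay on immediate
    storage, older ones move to near-line, and the rest sit in the
    off-site archive until the retention period expires."""
    age_days = (today - study_date).days
    if age_days > retention_years * 365:
        return "expired"        # eligible for deletion per retention policy
    if age_days <= 30:
        return "immediate"
    if age_days <= 365:
        return "near-line"
    return "off-site archive"
```

Whatever the actual numbers, the retention period a customer chooses flows directly into how much of each tier the infrastructure must provision.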

Another part of that environment is financing the technology. Proper economic support helps customers minimize their risk of being saddled with technology that is inadequate to its task. Some information-technology providers offer financing as a way of ensuring the viability of the ASP PACS infrastructure.

The ASP infrastructure environment also takes in the collaboration that must exist between the ASP and its suppliers of hardware, software, storage media, and telecommunications. In this collaboration, the PACS software supplier is the application: the letter A in ASP, if you will. The services (the letter S) are network management, server management, and the application servers. The P, the provider, supplies the design, implementation, and support. The S and P parts are what we at ComDisco provide: technology solutions that allow the other application components to be rich and robust.

ASP history

It is important to understand that ASPs, outgrowths of remote application hosting, have been around for quite some time: at least since the early 1980s, when remote hosting first evolved as a solution for disaster recovery and advance recovery. Early adopters of remote application hosting were the banking and airline industries. These industries managed information by installing mainframe computers in a central location and then acquiring and distributing data to and from user terminals at various locations, near and far, through the use of telecommunications.

The advent of mid-range server platforms and personal computers in the mid-1980s made it possible to decentralize the management and storage of information and the distribution of computing power. Decentralization created new hurdles for information technology and the software industry by fragmenting the control and distribution of databases and software licenses; hence, outsourcing and data-recovery centers became increasingly important.

Today, customers are rediscovering the merits of complementary remote hosting facilities and, with them, the concept of the ASP, or remote hosting. Health care providers with resource and economic constraints are contracting with service providers to reduce their exposure to technology obsolescence and information growth. The ability to amass digital assets or information at logically centralized, distributed locations provides secure redundancy and long-term archiving services, and allows users to gain access anywhere, anytime. Service providers can complement health care providers' current information-technology and data-center arrangements, or can be contracted to co-source or outsource services.

Concurrent with this evolution in the ASP model, there has been a change in the infrastructure landscape. The first infrastructures were intended to support ASP within autonomous, self-contained environments. Not long thereafter, though, came the advent of off-site data-recovery centers, which were structured to mimic the data center as it existed at the customer’s site. This made it feasible to take information and then port it from the customer’s site to the recovery center. At the time, this porting could only be accomplished via sneaker net, wherein copies of the customer’s information would be delivered on tape to the data-recovery center by a human courier.

Later came telecommunications-based networks. By this point, development of the infrastructures necessary to support the flow of information between customer and provider commenced. Because users were then able to port information virtually, the means had been acquired for the efficient transfer to the recovery center not only of data, but also of the customer’s system applications software.

Initially, the idea was that the data-recovery center would maintain copies of the data and applications so that, in the event of a system crash or other operational disruption, the customer would be able to dial up the recovery center and retrieve whatever sets of applications were needed to replace those that had been damaged or destroyed.

Unfortunately, the nature of telecommunications networks in those days was such that it could take extraordinarily lengthy periods for applications to be transferred back to the customer site. Many times, 3 to 6 days would elapse between the time of failure and the time the customer was back online and conducting business, a span that combined hours of lost transactions with the hours required to resume business.

To circumvent this severe limitation, customers would dispatch key people from their information-technology or information-systems departments to the recovery center whenever a system failure occurred. At the recovery center, the customer’s personnel would set up shop and, essentially, conduct business from that location, making it possible for the customer’s information-dependent activities to proceed while repairs were being made to the downed system back home.

The introduction of T1 lines and frame relays helped the data-recovery centers evolve into information centers. These information centers served a dual purpose. First, they were secure warehouses for the deposit and storage of customers’ data. Second, they functioned as hosts for customers’ applications.

Today, both the systems and the infrastructure have become sophisticated enough to permit these centers to push data and images easily to destinations all along the information superhighway. Because of this, radiologists can now access information almost anywhere (at home, in the car, and at the airport) from fully portable devices such as laptops and personal digital assistants.


Now that information can travel unencumbered, an issue arises: who actually owns the data at the heart of all this distribution? Is it the patient, the provider, the payor, or someone else entirely? This is a question that has been argued extensively. Perhaps it is best resolved through analogy with the simple act of buying a cup of coffee at the corner lunch counter. In that scenario, the buyer walks into the establishment, places an order, pulls money from a pocket to pay for the coffee, receives the coffee, and sits at a table to consume the coffee. What allows the purchaser to enjoy this moment is a particular tool (in this case, currency).

In health care, much the same thing transpires. A patient breaks a bone and comes into an emergency department to have that injury treated. Patient data are collected and diagnostic images are made. The medical provider uses this information as a form of currency to complete the transaction, which is the delivery of care. In other words, the data are not something anyone owns, any more than anyone owns money. Data are simply currency put into circulation and used to allow business to be conducted.

Information centers serve as brokers in this exchange, much as banks do when it comes to the transfer of cash from one party to another. The broker, that is, the ASP, acts as a repository for the movement of health care currency worldwide, relying on an infrastructure that permits information to be warehoused and distributed globally.

With information distributed to a multitude of sites and users, however, the need to build stronger security into the infrastructure is plain. Information today traverses many virtual networks, which makes it vulnerable to interception by hackers, and the Health Insurance Portability and Accountability Act makes protecting it a regulatory obligation as well. To address this, protocols defining the manner in which information flows across the superhighway have been developed and are being enhanced constantly. One of these protocols entails encrypting information at the sender so that only authorized recipients in possession of the decryption key can make sense of it.
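The encrypt-at-the-sender idea can be illustrated with a deliberately toy construction: a keystream derived by hashing a shared key with a counter, XORed against the message. This is a teaching sketch only; production health care traffic relies on vetted ciphers and protocols such as AES within TLS, not hand-rolled schemes like this one.

```python
import hashlib

def _keystream(key, length):
    # Derive a keystream by hashing the shared key with a counter.
    # (Toy construction for illustration; do not use for real data.)
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    stream = _keystream(key, len(plaintext))
    return bytes(p ^ s for p, s in zip(plaintext, stream))

decrypt = encrypt  # XORing with the same keystream inverts itself
```

The property the paragraph describes is visible in the sketch: without the key, the intercepted bytes are noise; with it, the authorized recipient recovers the message exactly.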

Another issue that prospective ASP customers often misunderstand is what happens to existing in-house infrastructure components when the institution signs up to use an ASP. Many are under the impression that their installed servers will be taken away and that all information processing will then take place at the service provider’s data center. That is, of course, a possibility, depending on the particular ASP selected, but removal of all existing servers is not recommended. Some servers need to remain in place at the customer’s site in order to achieve a balance between those operations that are properly based in-house and those that can be suitably based outside.


A well-planned and well-designed stair-step approach to ASP PACS can minimize financial and technological risk and augment a customer's existing infrastructure environment, allowing for predictable, controlled deployment of information management throughout the enterprise. In so doing, it is far more likely that the customer will be able to function in an infrastructure environment truly able to deliver information at the time when it is most needed, in the way that it is most desired, and in the place where it is most valuable. This is what we mean when we say that ASP PACS infrastructures today are mission critical.

Rick Mancilla is vice president and chief technology architect, ComDisco Healthcare Group, Rosemont, Ill.
