...

It is the technical workgroup's recommendation to keep all systems that depend on Rice in the same physical location. As the Kuali Rice framework matures, the database dependency is expected to become less of a critical issue, and distributing the applications and Rice infrastructure may then become technically feasible.

...

Kuali Rice applications have three main dependencies on central infrastructure:

  1. Central Authentication:
    All client applications will rely on the UC Davis Central Authentication Service (CAS). The connection to this service does not require much bandwidth, and latency is not a major concern, but a reliable connection is required.
  2. Kuali Service Bus:
    The Kuali Service Bus will consume bandwidth, but properly architected services will not be greatly affected by network latency. An application should make calls to somewhat coarse-grained services that may require multiple internal transactions to be executed by the service provider, which then returns the results to the client.
  3. Kuali Rice database connection:
    The Kuali Rice database connection from client applications to the central Rice database is highly dependent on a low-latency network connection to ensure a high level of performance. See Kuali KEW Architecture below.
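The difference in latency sensitivity between a coarse-grained service call and a chatty database connection can be illustrated with a rough back-of-the-envelope model. All figures below (round-trip counts, a 20 ms latency, 50 ms of server work) are hypothetical, chosen only to show the shape of the argument:

```python
# Rough model of why the Rice database connection is latency-sensitive
# while coarse-grained KSB services are not. All figures are hypothetical.

def total_time_ms(round_trips: int, latency_ms: float, work_ms: float) -> float:
    """Wall-clock cost of an operation that needs `round_trips` network
    round trips, each paying `latency_ms`, plus `work_ms` of server work."""
    return round_trips * latency_ms + work_ms

# A coarse-grained KSB call: one round trip; the service provider executes
# all internal transactions itself and returns a single result.
coarse = total_time_ms(round_trips=1, latency_ms=20, work_ms=50)

# An embedded workflow transaction issuing many small SQL statements:
# every statement pays the network latency to the central Rice database.
chatty = total_time_ms(round_trips=40, latency_ms=20, work_ms=50)

print(f"coarse-grained service call: {coarse:.0f} ms")  # 70 ms
print(f"chatty database session:     {chatty:.0f} ms")  # 850 ms
```

The server-side work is identical in both cases; only the number of round trips differs, which is why the database connection demands a low-latency link while well-designed KSB services do not.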
Kuali KEW Architecture

The current architecture for running the embedded workflow engine within a Rice application requires direct database access to the central Rice database. This architecture is currently in place for two main reasons:

  1. Performance
    1. If a client application ran the entire KEW engine remotely over the KSB, there are known performance issues with large numbers of KEW transactions. The workaround for this problem is to give the client application direct access to the KEW database, which allows it to perform at acceptable levels.
  2. Transactional Integrity
    1. If a client application ran KEW remotely over the KSB, it would not be able to roll back client transactions that failed remotely on the central KEW server. This issue could be resolved if a solid open-source implementation of WS-AtomicTransaction became available, or if we replaced the KSB implementation with a commercial implementation that supported WS-AtomicTransaction.
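The transactional-integrity point above can be sketched with a toy in-memory "database" standing in for the real application and KEW schemas (the keys and the `Transaction` class are hypothetical, for illustration only):

```python
# Sketch of the transactional-integrity issue, using a hypothetical
# in-memory "database" in place of the real application and KEW schemas.

class Transaction:
    """Minimal transaction: buffers writes and applies them only on commit."""
    def __init__(self, store):
        self.store = store
        self.pending = {}
    def write(self, key, value):
        self.pending[key] = value
    def commit(self):
        self.store.update(self.pending)
        self.pending = {}
    def rollback(self):
        self.pending = {}

db = {}

# Embedded KEW: application data and workflow data share one transaction,
# so a failure before commit undoes BOTH sets of writes atomically.
tx = Transaction(db)
try:
    tx.write("app:document", "saved")
    tx.write("kew:route_status", "ENROUTE")
    raise RuntimeError("validation failed")  # simulated failure; commit never runs
except RuntimeError:
    tx.rollback()
print(db)  # {} -- nothing leaked

# Remote KEW over the KSB: the workflow write commits on the remote server
# as soon as the service call returns, outside the client's transaction.
remote_kew = {}
tx = Transaction(db)
try:
    tx.write("app:document", "saved")
    remote_kew["route_status"] = "ENROUTE"  # already committed remotely
    raise RuntimeError("validation failed")
except RuntimeError:
    tx.rollback()  # undoes only the client's buffered writes

print(db)          # {} -- client writes rolled back
print(remote_kew)  # {'route_status': 'ENROUTE'} -- orphaned workflow state
```

The second case is exactly what WS-AtomicTransaction is designed to prevent: without a coordinated two-phase commit spanning both parties, the remote workflow state survives a client-side rollback.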

Physical Data Center Issues to Consider

Whenever services are hosted remotely, a number of standard data center issues must be addressed. These are independent of Kuali services, but they would need to be considered for any remotely hosted Kuali services.

Cost

The UC Davis San Diego Supercomputer Center (SDSC) Colocation Feasibility Study Report found that, when amortized over 15 years, the costs of co-locating at San Diego and of building additional space on campus were nearly identical.

Performance / Reliability of connection to remote location

We have a 1 Gbps network connection to San Diego for our administrative traffic. We would need to clarify the uptime requirements of remotely hosted services, which would drive how we architect our network connectivity. For instance, we would need to further investigate network redundancy to ensure a reliable connection; a backup connection could be purchased to reduce the risk of network interruptions.

...

Remote staffing

San Diego's service provides 'remote hands' staff who would rack servers, reboot systems, etc. Whatever co-location facility we used would need to provide this service, since our system administrators and operators would no longer have physical access to the servers.

...

Recommendation

In short, there are technical application reasons why all Kuali applications and Kuali Rice services should be hosted in the same location. There are no technical data center issues with co-locating services remotely, as long as the service owners are aware of the risks and work to mitigate them where possible. Once the infrastructure for remote hosting is available, hardware for new services could be purchased and shipped directly to the remote location as those services are brought online. Over time, the proportion of services at the remote location would grow while the locally hosted share would shrink.