Our Partners close more business.

Use these powerful resources to win more business, faster, with less effort.  
Call 877-411-2220 x121 for personal support with any opportunity.

Hosting Quote Estimator

GET a FREE Sandbox or Trial Environment NOW

How To Use This Tool:  

To find answers to common RFP and RFI questions, select a tag, or search for terms like "security" or "performance". You will find common questions and answers grouped together in one record. Follow the tag links to refine your search. Supporting downloads and documentation are available below.

Please log in to obtain download access to additional supporting documentation. Registered users can also contribute to the database. You can request access by Contacting Us.

© Omegabit LLC, 2023

Hosting Provider - About Operations and Host infrastructure

Q:

Provide documentation confirming the ability of the proposed software to meet all mandatory technical and functional requirements as specified in the attached requirements document. Please attach a copy of the Appendix I Technical and Functional Requirements, indicating the ability to meet the listed requirements in the appropriate column.


A:

Omegabit Liferay Enterprise Portal Hosting

Omegabit is a full-service managed hosting provider that specializes in Liferay runtime operations, optimization, and support. It has established an excellent reputation for quality of service as the longest-operating and only dedicated Liferay Hosting Services Provider, Certified Partner, and Digital Experience Platform Reseller. Omegabit provides excellent out-of-the-box solutions for any Liferay application, and is able to build to suit to meet any special requirements.

 

Omegabit brings a wealth of experience in operational support and long-term infrastructure administration and maintenance, with special emphasis on Liferay operations. It will be a key participant in the design and implementation of the customer infrastructure, in collaboration with your development team.

 

Omegabit presently serves many government and related agencies, including the City of San Francisco 311 Project, the Chicago Metro Agency for Planning, and Alignment Nashville (USA), as a few notable examples. Omegabit's highly experienced professional team is able to help inform important decisions relating to the long-term stability, performance, and cost efficiency of operations for your site infrastructure. Omegabit is an excellent complement to your development team and is included in this proposal as the recommended supplier of Liferay Digital Experience Platform Hosting and Operational Support services.

 

Omegabit Liferay Enterprise Portal Hosting Features:

  • World-class Liferay tuned and optimized private cloud infrastructure with extensive security features and SOC2 compliance
  • Customers operate from a highly secure private VM VLAN "private cloud space" behind advanced security protection including Active Intrusion Protection, DDoS protection, optional PCI export filtering, and security AI heuristics (zero-day quarantine), plus proactive security-aware environment configuration and maintenance.
  • Liferay Trained and Certified administration and engineering experts on staff
  • Seasoned professionals with a deep background of relevant experience in .gov, .com and enterprise portal and Liferay infrastructure
  • Longest operating Liferay Certified Hosting Partner in existence
  • Most Managed Liferay Installations of Any Hosting Provider
  • Only Hosting Provider Offering Comprehensive Support Throughout the Liferay Application Layer
  • Preferred by Liferay Certified Developer Partners for Managed Services
  • All US-based operations and administration with HQ in California
  • Able to provide advanced infrastructure
  • Direct administration and control by Omegabit's US-based and Liferay certified personnel
  • Modern HP-based high-performance enterprise-class servers with high reserve limits enforced (2.3GHz+)
  • Highly redundant physical infrastructure
  • VMWare 5 Enterprise Plus, including HA and FT capability
  • Ability to scale to very large CPU and memory footprints (36+ cores; 256+ GB RAM)
  • High Performance SSD Accelerated RAID SAN Storage and Network paths
  • Active intrusion detection and prevention firewalls
  • 10Gb infrastructure and Internet connectivity
  • Fully Liferay cluster aware and capable Infrastructure (with deep tuning and troubleshooting experience)
  • VM-image level backups and snapshots for comprehensive recovery
  • VPN Connectivity for backoffice operations
 

Primary: Omegabit Operations Center (NOC)

Digital West Networks Inc., San Luis Obispo, CA

 

Secondary POPs:

One Wilshire, Los Angeles, CA (primary DR)

Equinix, San Jose, CA (secondary DR)

Equinix, New York City, NY (POP)




Hosting Provider Location and Infrastructure

Q:

Who is the hosting provider and where are their data centers located? Please indicate which locations are primary and which are secondary.


A:

Primary: Omegabit Operations Center (NOC):

Digital West Networks Inc., San Luis Obispo, CA
3620 Sacramento Drive, #102
San Luis Obispo, CA 93401

Primary Remote Archive & DR Location:

Digital West Inc. 
@One Wilshire
624 S. Grand Ave., Suite 1010
Los Angeles, CA 90017

Secondary POPs Available:
Digital West Inc.
@Equinix
11 Great Oaks Blvd
San Jose, CA 95119

Digital West Inc.
@Equinix
275 Hartz Way
Secaucus, NJ 07094

Digital West Inc.
@Equinix
47 Bourke Road
Sydney, Australia 2015

Digital West Inc.
@Green Datacenter AG
Industriestrasse 33
5242 Lupfig
Switzerland
 

Please describe your technology platform including hardware, software, operating systems and storage.

Omegabit Operating Infrastructure Summary:
Omegabit operates a private, Liferay-optimized cloud infrastructure designed specifically for fault tolerance, scale, and efficiency from world-class colocation facilities in the United States.  The primary data center is in close regional proximity to our business and operations headquarters in California.
 
Omegabit infrastructure features:

● Direct administration and control by Omegabit's US-based and Liferay certified personnel
● Modern HP-based high-performance enterprise-class servers with high reserve limits enforced (2.3GHz+)
● Highly redundant physical infrastructure
● VMWare 5 Enterprise Plus, including HA and FT capability
● Ability to scale to very large CPU and memory footprints (36+ cores; 256+ GB RAM)
● High Performance SSD Accelerated RAID SAN Storage and Network paths
● Active intrusion detection and prevention firewalls
● 10Gb infrastructure and Internet connectivity
● Fully Liferay cluster aware and capable Infrastructure (with deep tuning and troubleshooting experience)
● VM-image level backups and snapshots for comprehensive recovery
● VPN Connectivity for backoffice operations

 




Data Architecture in Shared Hosting Environment

Q:

What type of data architecture is implemented?

How is data security managed in the shared environment? What controls are in place?

If the environment is shared, how are the data segregated from other shared environments?

Will our solution be hosted in a dedicated or shared environment?

For any hosted offerings, would the client use your product on a dedicated or shared environment? Is there an option to choose?


A:

Omegabit directly operates a private VMWare-based cloud infrastructure that is purpose-built for secure Liferay operations.  Omegabit directly owns and manages all computing layers including edge routers and firewalls, servers, storage, and interconnecting equipment at each physical hosting location, and relies on Digital West and its facilities providers for secure physical plant operations, redundant power, cooling, redundant private cross-POP interconnects, and Internet connectivity.

All environments are provisioned within a firewall-protected private VLAN that is exclusive to each customer's specific purpose.  Only public-facing services are exposed via the firewall.  Customers may only access and control applications and data located within their respective private cloud.

Common SAN storage is utilized at the abstracted VMWare layer and is completely isolated from customer access.  Encryption at rest is available.

All customer facing virtual machines, storage, access and network paths are exclusive to the use of that specific customer.

Omegabit uses industry-leading VMWare-based storage and virtualization technology combined with enterprise-class servers, storage, and network infrastructure to provide Liferay-optimized host environments. All servers and virtual host environments are fully patched and protected against Meltdown/Spectre and similar virtualization exploits.  Omegabit also operates 100% AMD chipset-based server infrastructure, which is inherently more secure.

For a comprehensive explanation of VMWare-based infrastructure, please see:

http://www.vmware.com/pdf/vi_architecture_wp.pdf

The proposed solution is based upon standard Liferay reference architecture optimized for the stated use case and cost efficiency. 

 

Omegabit is able to supply an always-on VPN connection that can support secure back-channel links to core infrastructure (e.g. system of record, SSO or directory services, e-commerce transaction processing) over a dedicated BOVPN link.

Omegabit is also able to support special security rules and configurations at the Firewall and Apache rules layers, which can be used to enforce specific client/destination restrictions (as a complement to Liferay logic).

Please see the supplied addendum "Third Party Privacy-Security Questionnaire" for a detailed explanation of Omegabit security features, controls, and options.

Data is segregated at the virtual machine disk image level. All control is limited exclusively to Omegabit authorized administrative personnel.

From the CLIENT perspective, all environments are dedicated to their sole purpose.  We operate a secure, private cloud infrastructure that runs on top of large-scale enterprise-class servers and high-performance SAN storage, which are clustered and shared collectively across our tenant installations using VMWare technology.  This provides more flexibility, scalability, and performance-on-demand as compared to dedicated physical hardware and is preferred for these reasons.

All resource reservations are guaranteed.  Omegabit follows strict environment isolation, discrete configuration, and data management practices to ensure separation between hosted environments, and is PCI-I, HIPAA/FERPA, and FEDRAMP compatible.

 

We are able to accommodate private dedicated host infrastructure but do recommend leveraging our secure, already redundant, and Liferay-optimized cloud infrastructure for the best balance of cost, performance, resilience and manageability.  

We build to suit and are happy to accommodate any special requirement in this regard.  However, building a similarly capable dedicated infrastructure specific to Babson may have a substantial impact on cost.




Hosting Provider - High Availability

Q:

High Availability: Incoming HTTP, HTTPS, as well as other configured protocols, will be handled by either shared or dedicated load balancers.

HA seems to be covered by ESXi features, HA/FT, but is there anything set up inside the OS/application?


A:

Omegabit offers a completely HA infrastructure with no single point of failure in the chain of systems, and utilizes the latest VMWare, Nimble Storage, and high-speed network architecture available to ensure no bottlenecks in systems operations.

The proposed solution also includes a fully clustered Liferay application, which provides additional redundancy at the critical logical layers of the software. As with most clouds, all servers are redundant and software will automatically fail over to another node in the event of a physical system failure. Moreover, Omegabit is able to provide additional "hot" fault tolerance as an optional feature.

Omegabit also operates extensive backups including on-site runtime snapshots every 2hrs and continuous "hot" offsite archives to a secondary DR location.

Liferay can be clustered for HA at the application (JVM) level.  In that case we also ensure that your nodes are distributed across separate physical hardware.  This does provide the convenience of hot-redundancy at the application server layer, and the ability to support things like rotating outages for Liferay.  It does increase the cost both in terms of infrastructure and DXP licensing (2x, typically).  

For true HA, it would also be appropriate to cluster the search services (Elastic), although they are less prone to breakage and change relative to the app servers.  And, in a perfect world, the DB as well.  However, that is more complicated:  MySQL does not cluster well; there are alternate strategies available (including hot-HA at the VMWare/cloud layer).  That said, the DB and search are typically the most stable layers and are least likely to benefit from HA in terms of practical ROI and actual risk vs. cost.  The alternative option is to employ a commercial RDBMS like Oracle in a cluster, which has licensing implications, but we are happy to host/accommodate.  If this is a near-term concern, let's have a call to discuss options.  If this is a hypothetical, the answer is yes, we can absolutely support it with enough resources in play and assuming it is cost-justified to the business.




Scalability

Q:

Please describe your ability to scale your processing capacity. How long does it take to implement more processing capacity?

Can you share any information on application scalability?

Is there a breaking point where a new server is required?


A:

All scalable resources are immediately available on tap. Omegabit's private, Liferay-optimized cloud infrastructure is able to add or redistribute resources including CPU, RAM, and storage on demand, and to help assess where allocations can be of benefit under specific load conditions. All resources can be added or removed in any increment on a month-to-month basis. Complete additional server nodes including OS and application configuration can typically be provisioned in 24-48 hours. The preference is always to preempt surprise increases to capacity through well-designed testing and optimization tuning.

 

Unfortunately, these are loaded questions.  The variables include whether the users are logging in, what the pages contain and how they are implemented, and the workflow.  Relative to other platforms, Liferay scales extremely well, assuming it has been properly implemented:  for example, it is not enough to write a Java application and wrap it in Liferay for delivery; the APIs must be used for data and web services to gain the benefits of Liferay caching and optimization.  We've seen well-built apps scale incredibly well, and poorly built ones fall apart with very few users.

I have attached a Liferay white paper that explains the scale of the platform, OOTB, under specific (highly optimized, obviously) use-case conditions.  The catch, of course, is that there are no custom plugins or workflow, and they were selective in implementing the features in a manner that is least expensive.

Liferay claims:
• To help accurately demonstrate "enterprise scale," this study was commissioned with:
    • 1 million total users
    • 2 million documents with an average of 100KB per document
    • 10,000 sites with 50% of the sites having at least 5 children
    • 4 million message forum threads and posts
    • 100,000 blog entries and 1 million comments
    • 100,000 wiki pages

The key findings of the study are:
    1. As an infrastructure portal, Liferay Digital Enterprise can support over 36,250 virtual users on a single server with mean login times under 378 ms and maximum throughput of 1020+ logins per second
    2. The platform's Document Repository easily supports over 18,000 virtual users while accessing 2 million documents in the document repository
    3. The platform's WCM scales to beyond 300,000 virtual users on a single Liferay Digital Enterprise server with average transaction times under 50ms and 35% CPU utilization
    4. In collaboration and social networking scenarios, each physical server supports over 8,000 virtual concurrent users at average transaction times of under 800ms
    5. Given sufficient database resources and efficient load balancing, Liferay Digital Enterprise can scale linearly as one adds additional servers to a cluster. With a properly configured system, by doubling the number of Liferay Digital Enterprise servers, you will double the maximum number of supported virtual users

Note that their infrastructure is based on bare metal servers running at clock speeds that are more typical of desktop systems:

1. Web Server
    • 1 x Intel Core i7-3770 3.40GHz CPU, 8MB L2 cache (4 core, 8 HT core)
    • 16GB memory
2. Application Server
    • 2 x Intel Xeon E5-2643 v4 3.40GHz CPU, 20MB L2 cache (6 core, 12 HT core)
    • 64GB memory, 2 x 300GB 15k RPM SCSI
3. Database Tier
    • 2 x Intel Xeon E5-2643 v4 3.40GHz CPU, 20MB L2 cache (6 core, 12 HT core)
    • 64GB memory, 4 x 146GB 15k RPM SCSI

All respect to Liferay, this is not realistic.  The actual capacity of most production systems is about one order of magnitude lower, based on our real-world observations of portal implementations of all shapes and sizes.  It is feasible to reach this level of capacity only under very narrow conditions.  Customizations can have a substantial impact.  And real-world implementations are typically much more transactionally expensive than this test case for practical reasons (look, usability, content value, etc.).  Caching and optimization aside, if your page is 5x more expensive to render, you can expect a 5x impact on this economy of scale.  In the real world that is frequently the case.

Our experience tells us that in most cases you can plan for about 3.5-4K concurrent and active authenticated sessions per 8-core app server under heavy use, while maintaining a good page-paint time of 3-5s (that is, users that are logged in and actively clicking; you can have many more inactive authenticated users with no performance impact given enough memory), given sufficient DB and support infrastructure.  In some use cases, it may actually be much better.  Also, Liferay tolerates overloads extremely well, so more users will typically mean longer page load times but the system will still be able to handle the demand.  We can accommodate custom bare-metal setups, but they are not economical and do not have the same benefits as the cloud servers, which more typically run at 2.4-2.6GHz.  That said, our 8-core AMD-based servers are typically faster compared to the 6-core HT servers Liferay has used for their bench.  We get similar or in some cases higher economies of scale due to thread concurrency.

Your application use case (at least the one we have discussed so far) is very simple, and I expect that you will see the upper end of the scale in terms of achieving Liferay's "best speed," assuming good application design practices.  I do believe that you will need to add additional CPUs and eventually break out services in a more horizontal configuration to accommodate high concurrent demand.

What we are proposing as a starting point for you is intended to handle hundreds of concurrent public/unauthenticated users and anywhere from tens to hundreds of authenticated concurrent users depending on use case (this is a partially-qualified SWAG, at best).  Scale can be achieved both by adding CPU (and sometimes RAM) resources, and by breaking out the services to separate hosts (app, db, search, web acceleration).
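To make the sizing arithmetic above concrete, here is a minimal Python sketch that turns the per-server rule of thumb (roughly 3.5-4K active authenticated sessions per 8-core app server, scaled down by how much more expensive your pages are to render than the reference case) into a rough node count. The function name and default numbers are illustrative assumptions, not a quote:

def app_servers_needed(active_sessions, page_cost_multiplier=1.0,
                       sessions_per_8core=3500):
    # Rule of thumb from above: ~3.5-4K active authenticated sessions per
    # 8-core app server; a page that is 5x more expensive to render cuts
    # effective capacity by ~5x.
    effective_capacity = int(sessions_per_8core / page_cost_multiplier)
    # Round up: partial servers don't exist.
    return max(1, -(-active_sessions // effective_capacity))

# Example: 6,000 active users on pages ~2x the reference render cost -> 4 nodes.
print(app_servers_needed(6000, page_cost_multiplier=2.0))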

For what it is worth, we are here to help and are well equipped to squeeze the most out of your infrastructure and to help you make informed decisions as to how to scale.




Systems Maintenance - Server Hardening

Q:

Server Hardening

What OS hardening has been done to the system?


A:

Infrastructure hardening is extensive and occurs at many levels of the hardware/software/network stack. This is documented in the Omegabit Internal Operations Wiki, and in Client Wikis where applicable, and is private to each specific Client. Details are typically summarized in a policy statement supplied by Omegabit to each compliant tenant.

 

All layers of the infrastructure are continuously hardened against evolving threats (firewalls, VMWare, storage, etc.).  Firewalls are updated hourly against a live DB of known threats.  We can optionally enable zero-day quarantine and Data Loss Protection filtering (they have some performance tradeoffs but are available to you if desirable).  Your provisioned infrastructure operates in a private VLAN "bubble" that is completely locked down.  Only SSH and HTTP/S are exposed to the Internet by default, and we can restrict access to any service at the firewall on request.

The OS VM containers that are provided are also pre-hardened and patched to the latest OS release on delivery.  Only necessary services are installed/activated.  All passwords are set (strong), and admin access is limited where applicable (e.g., root can only connect to MySQL from localhost by default).  We do not run OS-level firewall services by default, except where applicable for special configuration.  However, they can be enabled if desirable (we recommend not, for best performance and given the nature of the isolated infrastructure and the hardware firewalls in front; the VLAN is trusted).  All servers also actively watch for and respond to intrusions using fail2ban, which monitors all connected services for brute-force and DDoS attacks.

Strong passwords are enabled and configured by default.  We do expect your team to maintain their own passwords.  However, we can enable password change restrictions at the OS level on request.  We typically defer to you to set the security in Liferay as you require, but can certainly advise on best practices and how to use the Liferay password controls.

Because managing the systems is a joint responsibility, and both teams have access, we are continuously looking for changes that may imply risk and will advise, e.g., permissions changes, or if the dev team decides to install new services.  Our expectation is that you keep us informed of changes that occur outside of our control, so that we don't step on your efforts and can advise on any potential impact (security or otherwise).  If you would like a more formal and automated means of documenting this, we strongly recommend considering a subscription to Dynatrace SaaS, which provides auditable recording of environment changes as well as a fantastic set of performance analysis tools for your custom application.  Let me know if this is of interest and we can discuss in more detail.
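As a minimal illustration of the default exposure posture described above (only SSH and HTTP/S answering from the Internet), the Python sketch below probes a host for unexpected open TCP ports. The hostname is a placeholder and the expected-port set is an assumption for illustration; this is not a substitute for a proper scan or the firewall's own audit tooling:

import socket

EXPECTED_OPEN = {22, 80, 443}  # SSH, HTTP, HTTPS per the default posture

def open_ports(host, ports, timeout=0.5):
    # Return the subset of `ports` that accept a TCP connection.
    found = set()
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                found.add(port)
    return found

# Placeholder hostname; probe the first 1024 ports.
unexpected = open_ports("portal.example.com", range(1, 1025)) - EXPECTED_OPEN
if unexpected:
    print("Ports open beyond the default posture:", sorted(unexpected))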




Relocating an existing database

Q:

If we decide to move the database to its own server, will that require downtime?


A:

Yes; the downtime would be the time it takes to clone the DB itself over to the new target host, which would be pre-staged.  Depending on the size, this usually takes anywhere from 10–30 minutes.  We can certainly arrange for that during a quiet period of operation, and we will always work to coordinate these changes in a manner that is least impactful to your operations.
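As a back-of-envelope check on the 10–30 minute estimate above, a hedged sketch: downtime is roughly the database size divided by the sustained copy rate. The 200MB/s rate is purely an assumed figure for illustration; actual rates depend on storage and network paths:

def clone_downtime_minutes(db_size_gb, copy_rate_mb_s=200.0):
    # Minutes to clone a database of db_size_gb at a sustained copy rate.
    return (db_size_gb * 1024) / copy_rate_mb_s / 60

# Example: a 250GB database at ~200MB/s sustained -> ~21 minutes,
# inside the 10-30 minute window quoted above.
print(round(clone_downtime_minutes(250)), "min")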




Physical and Facilities Security and Access

Q:

Do you have documented physical security policy and procedures?

Do you have a process that restricts and maintains access to information facilities (data centers, computer rooms, computer/network labs, and telecommunication closets), and areas with Federal Reserve information to authorized personnel only?

Are access lists and authorization credentials reviewed at least annually?

Do you authenticate visitors before allowing access to facilities that are not designated as public access?

Do you have controlled entry points that use physical access devices and/or guards to facilities?

Do you change facility keys and combinations upon loss, compromise, or individual transfer or termination?

Do you monitor physical access to facilities with real-time physical intrusion alarms and surveillance equipment?

Are visitors to the facilities logged, escorted and their activities monitored?

Do the facilities provide emergency power shutoff with switches or devices in locations where concentrations of information systems exist?

Do the facilities incorporate an uninterruptible and alternate power supply to protect against a short-term and long-term loss of primary power source?

Do the facilities have fire detection and fire suppression devices that activate automatically and notify emergency responders in the event of a fire?

Do the facilities employ automated mechanisms to monitor and maintain temperature and humidity level?

Do the facilities protect the information systems from damage resulting from water leakage by providing master shutoff valves that are accessible, working properly and known to key personnel?

Do you have formal procedures to ensure access privileges are reviewed on a periodic basis?

Describe the logical and physical security of your hosting facility.


A:

All facilities feature:

  • 24x7 on-location staffing and site access control, CCTV surveillance
  • Secure ID+Biometric access control to sensitive areas, mantraps
  • Locked Cage, Cabinet infrastructure exclusive to Omegabit host operations
  • Omegabit owns/manages all private cloud infrastructure from the public edge/redundant public interconnects
  • All data is encrypted in transit between secure endpoints
  • All Client traffic is exclusive to Client operations
  • Customer datastores are exclusive to each Client and completely isolated

ref: SOC 2 Type II Facilities Compliance Report for Omegabit colocation facilities managed by Digital West and alternate providers (available on request).

None; all data and storage are maintained and operated exclusively by Omegabit and specially authorized and trained personnel with special awareness of Liferay operations. No proposed services or facilities in this proposal are to be outsourced to an additional third party; all will be satisfied exclusively by Liferay and Omegabit and its affiliated facilities partners, where named.

Yes.

Current driver's license, passport, or other government-issued ID.

Includes biometric + keypad access control, mantraps, and human verification at all points of entry at all times; exclusive access to private locked cabinets.

Yes; all relevant access is immediately rekeyed and electronically controlled.

Yes.

Yes.

Yes.

(All locations) Commercial rack infrastructure mainline UPS (APC) and private emergency generator at 100% operating capacity; pre-scheduled and guaranteed emergency fuel delivery for extended outages; regular testing and maintenance; redundant power paths to host infrastructure; locations immune to rolling outages.

Dual-interlock, dry-pipe pre-action fire suppression system.

Yes.

Yes.

Not ad hoc, but on a needs basis.

 




Account Access approval

Q:

Do administration/privileged access accounts require additional approval by appropriate personnel (e.g., system owner, business owner, chief information security officer)?


A:

Yes.  As it relates to backend access, Clients may designate authorized approvers and any required workflow, e.g., validation from an independent Client Security Team, for approval. Access is only provided where explicitly requested/approved, and access is strictly limited on a needs basis. Omegabit will recommend and follow best practices but defer to the Client on the preferred method of approval and on determining what level of access is appropriate for its administrative users.  As it relates to front-end (portal UI) access and control, this is typically under the direct management of the Client at implementation and can vary based on the desired workflow and use case.  Omegabit is able to advise Clients on the use of Liferay access and permissions controls, and on other considerations relating to PCI and similar compliance, e.g., encryption of designated data within the Liferay application database.  These options are available to Clients on request and are typically determined in collaboration with Client engineering teams at the time of the application design.

The details of the approval process are established at onboarding time and implemented as part of Omegabit's customer management workflow to help ensure quality of service for any/all requests.

 

Configured per Customer Operations Policy and SLA terms.




Personnel & Contractors - Termination & Transfer

Q:

Do you have Termination and Transfer Policy?

Upon termination of an employee or contractor, do you immediately terminate access to systems, and retrieve all company assets (i.e., equipment/devices, PCs, access cards, keys, smart cards, tokens, cell phones, information and documentation)?

Upon the transfer of an employee or contractor, do you review the logical and physical access authorizations to verify that the authorizations are still appropriate?



A:

Yes; this is strictly enforced as a key component of Omegabit's secure operations.

Upon termination of an employee or contractor, Omegabit immediately terminates access to systems, networks, and infrastructure (virtual and real), and retrieves all company assets (i.e., equipment/devices, PCs, access cards, keys, smart cards, tokens, cell phones, information and documentation).

This is documented in the Omegabit Employee Handbook, Section 4.7.

The Employee Handbook and related HR procedures are internal documentation; they include proprietary actions and are sensitive in nature.

Yes.





Access Control

Q:

Do you have a process that authorizes and maintains a list of authorized personnel, consultants, and vendors for maintenance activities? If yes, do you grant temporary credentials for one-time use or a very limited time period?

Do you allow non-local maintenance? If yes, do you employ multi-factor authentication for all sessions and network connections, and terminate connection once completed?


A:

Database, search, and other ancillary services operating within the Client private infrastructure are exclusive to the use of the Client and are not shared with any other user, Client, or application except where explicitly intended by the Client application design. All database services access is restricted by firewall, connecting client IP, unique user IDs, view restrictions, and strong passwords. Omegabit will implement the most secure ("off before on") style of access control by default, and coordinate with the Client to make informed, security-aware changes where required for the operation of the hosted application.

Access of this nature is always chaperoned.

All administration links require two-token VPN-linked authentication (password + complex trust key) or SSH tunnel, plus single-factor authentication for console access and additional secondary authentication for privileged access, by default. All restrictions and controls are configurable per Client requirements. Strong (15-char, complex) and unique passwords are employed, always. Optional Google two-token public authentication, digital certificates, and personal keys are also supported on request. Hardware-based two-token authentication integration for Client systems is also supported as a customization.
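For illustration only, here is a minimal Python sketch of a generator consistent with the stated policy (15-character, complex, unique passwords); the exact character classes required are our assumption, not Omegabit's published rule:

import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def strong_password(length=15):
    # Draw until the password contains all four character classes.
    while True:
        pw = "".join(secrets.choice(ALPHABET) for _ in range(length))
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

print(strong_password())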

 




SDLC Change Management

Q:

Describe the SDLC/Program Change controls for application changes, system software changes and hardware changes, including vendor management approval and testing of changes.

How do you handle change management?


A:

Omegabit does not determine the SDLC process preferred by the Client, but is able to support specific needs relating to approval, push assistance, automation, etc.  

Omegabit's seasoned Professional Services group can also serve as an extension of your development and administration teams to help with ongoing change management, optimization, security, and other critical lifecycle maintenance.

Generally speaking, an authorized Client representative will use the Omegabit support ticketing system to submit a request, track its approval, and get updates on the status and outcome.  With the exception of critical emergencies with imminent security risk, or in the case of a known fault remedy, Omegabit will always coordinate with the Client on change management planning, procedures, and scheduling before proceeding with modifications to the environment.

  1. Enter a ticket request
  2. Ticket will follow desired approver workflow as stipulated by Client
  3. Ticket is updated with status and information during processing, and stored for historical reference once closed.




Fault Tolerance

Q:

Describe how system/application redundancy and data mirroring are performed and where.


A:

All layers of the infrastructure are fully redundant and fault tolerant, as is typical of any cloud infrastructure, including but not limited to: power, cooling, Internet connectivity, operational connectivity, physical servers, switches, network paths, virtualization containers, and virtual machines. Optionally, Clients may elect a clustered and/or high-availability configuration, which can provide additional runtime failure protection, "hot" fault protection for single points of failure in the application software infrastructure, and the ability to perform rotating outages without impact to production operations.

 

Most failure scenarios are handled automatically and may be transparent, or may necessitate a restart of the affected service.

 

If a physical server fails, the vhost will automatically restart on another server and rejoin service (minutes, typically).




Backups

Q:

When do you backup?

How often do you backup?

Do you conduct backups of user-level information, system-level information and information system documentation including security-related documentation; and protects the confidentiality, integrity, and availability of backup information at storage locations?

Are the backup and restore process and procedures reviewed, and backed up information validated at least annually?

What is the backup schedule and retention on these systems?

If there is an issue, what is the process for a restore?

Can you elaborate on the offsite archive?

RPO/RTO expectations and testing schedule?


A:

 

In the case of most failures Omegabit features full redundancy and fault tolerance at the primary host facility as a function of the private cloud infrastructure.  Full Disaster Recovery is only initiated in the event of a catastrophic facilities failure.

In the event of a catastrophic failure of the physical plant, Client services will fail over to one of our secondary DR locations.  Omegabit has the ability to backhaul traffic between private NOCs/POPs, or route directly from the DR location, and, depending on the nature of the failure, can activate BGP to re-route public IPs.

A DR process TOC is available for review on request (much of it is redacted for security).  In the case of most failures, Omegabit provides full redundancy at the primary host facility.  

The standard SLA terms apply. The formal promises for critical faults are (summarized - see SLA for more details):

An initial response time within 2 hours is promised for Severity I issues, and 4 hours for Severity II issues, regardless of notification by automated alert or customer contact.
(actual response time is typically <15 minutes for critical issues)

For non-catastrophic events, e.g. equipment or primary storage failure, an RTO not to exceed 12 hours is promised, with an RPO not to exceed 24 hours[1]. 

[1] Assumes worst-case; in a typical failure scenario our redundant cloud infrastructure can tolerate a failure, e.g. a server node, switch path, or disk failure, transparently, or with minor administrative intervention, and recovery in <1hr with no loss of data.

For catastrophic events requiring comprehensive relocation of service to a separate hosting facility, an RTO not to exceed 48 hours is promised, with an RPO not to exceed 2 weeks[2] (15 days).
[2] Special terms and retention policies are available on request.  Assumes worst-case disaster recovery scenario from offsite archives; the RPO in this "catastrophic" scenario is more typically <48hrs, from near-line backup.

Please see the supplied copy of the SOW and the sections on backups and Support Ticket and Escalation Procedures for more details. 

This is what is promised OOTB.  Omegabit can accommodate any additional requirements around these expectations as a special request - including hot DR failover.  But, substantial additional costs will apply for both DXP licensing and infrastructure.  

What is offered OOTB is typically the best balance of cost and protection, practically speaking.  If you require more, we'll support it.



summary:

Backup snapshots of the entire VM stack are performed every 2hrs, and the offsite archives of those backups are continuous to a second remote physical location.  Retention is 48hrs for 2hr snaps, 30 days for dailies, and 16 weeks for weeklies.  We can accommodate longer retention if necessary.  Some of these retention policies impact RPO.  For PCI, you may want logs to last up to 1yr, but that can be accomplished through application design or by depending on our backups.  We recommend using both strategies depending on your reporting needs.


Backups should be considered for disaster recovery purposes only.  Our retention policy is variable and based upon data volume.  Depending upon the environment, rollbacks to the previous day, several days, or weeks are available, but with sporadic snapshots between periods.  Therefore, a specific point-in-time recovery may not be possible.  We are typically able to restore backward up to several weeks depending upon the total size of your store.
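To illustrate how the stated retention tiers bound recovery granularity, the Python sketch below models the schedule above (2hr snaps kept 48hrs, dailies kept 30 days, weeklies kept 16 weeks) and reports the worst-case gap between restore points for data of a given age. The helper itself is hypothetical; only the numbers mirror the stated policy:

from datetime import timedelta

# (snapshot interval, how long that tier is retained)
RETENTION = [
    (timedelta(hours=2), timedelta(hours=48)),
    (timedelta(days=1),  timedelta(days=30)),
    (timedelta(weeks=1), timedelta(weeks=16)),
]

def worst_case_gap(age):
    # Largest gap between restore points for data of a given age;
    # None if the age exceeds the longest retention window.
    for interval, kept_for in RETENTION:
        if age <= kept_for:
            return interval
    return None

# Example: restoring to a point ~3 days back falls in the daily tier,
# so restore points are at most 1 day apart.
print(worst_case_gap(timedelta(days=3)))  # 1 day, 0:00:00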

Backups are automated using a combination of VMWare and Nimble Storage technologies.

Backups are comprehensive and cover all aspects of internal and Client operations using a VM snapshot based approach for rapid, transparent backup and recovery.

Backup and recovery procedures are exercised several times per month as a function of normal operations, and to support the snapshots and rollbacks that are part of customers' normal development activities.

 

If there is an issue, what is the process for a restore?

We can restore any VM snapshot on file in a matter of minutes (it usually takes about 20 minutes to mount the backup partition and re-fire the image).  Recovering items inside that image is a matter of logging in and parsing the necessary data (files, db backups, etc.).  Typically, if there is a need to restore, we will recover all the dependent VM nodes and simply restart them.  In most cases when we do a recovery it is at the customer's request (e.g., they stepped on something, accidentally), so, we'll do a whole or partial restore based on that need; the restore process is always "informed" as best as possible so that we are not just arbitrarily rolling you back to some point in time without understanding the goals.  In some cases, a partial restore is sufficient.  We will help to inform this decision based on your goals.

 

Can you elaborate on the offsite archive?

Backups and archives are performed at the SAN level (Nimble Storage+VMWare APIs).  Backups are cloned as archives to a redundant SAN at our LA location at One Wilshire/CoreSite, where we operate secondary/backup and DR infrastructure.
 

What is the DR process you have in place?

I've attached a TOC for our DR process (much of it is redacted for security).  In the case of most failures we have full redundancy at the primary host facility.  In the case of a catastrophic failure of the physical plant, we would fail over to one of our secondary locations; in your case, @One Wilshire in Los Angeles.  We have the ability to backhaul traffic, or route directly from LA, and depending on the nature of the failure, can activate BGP to re-route public IPs.
 

RPO/RTO expectations and testing schedule?

Recovery testing is scheduled quarterly but exercised much more frequently as a function of supporting our production customers.

The short answer is that we can recover from most failures transparently, or at least automatically by failover in our cloud.  All network and cloud infrastructure is fully redundant.  

Your Liferay portal may or may not be redundant depending on the setup; it would have to be clustered.  Presently we are not discussing an application cluster, so the vhost nodes themselves are a single point of failure.

If a physical server fails, the vhost will automatically restart on another server and rejoin service (minutes, typically).  System failures that require intervention may take longer to resolve, but we are typically responding within 15 minutes.

If a network layer were to fail, it is usually transparent due to redundancy. 

Practically speaking, our reaction is very fast and we will respond aggressively to any interruption or degradation in service, at any hour.  
 




Disaster Recovery

Q:

Is there a plan for Incident Response?

Do you have a Disaster Recovery Document?

Do you have policy and procedures which document your business continuity (BC) and disaster recovery (DR)?

Do you have BC/DR plans that assure the continuity of service and products provided to meet client's RTO and/or RPO?

Are roles and responsibilities documented in the contingency plans?

Do you conduct business impact analysis at least annually?

Do you provide contingency training to your staff according to assigned roles and responsibilities at least annually?

Have you conducted BC/DR tests/exercises on this system with all appropriate parties in the last 12 months and revise the plans to address changes and problems encountered during implementation and testing?

Is the system included in your organization's business continuity and disaster recovery (BC/DR) plan?


What type of business continuity and disaster recovery options are included as part of this solution? Is this part of the standard services?

How are the backup data stored?


A:

This is documented in the Omegabit Internal Operations Wiki.

This is documented in the Omegabit Disaster Recovery Handbook, Sections 1.1–1.4 and Section 2.3.

ref: Omegabit Disaster Recovery Plan TOC

Yes. Per the agreed-upon SLA.

Yes. ref: Omegabit Disaster Recovery Plan TOC

Yes. ref: Omegabit Disaster Recovery Plan TOC

Yes. ref: Omegabit Disaster Recovery Plan TOC, Omegabit Operations Portal, and Training curriculums

Yes. The DR plan was most recently exercised and updated in Q2 of 2017. A statement certifying this can be provided by executive management, provided the vetting proceeds to the next round.

ref: Omegabit Disaster Recovery Plan TOC

● Logical and physical redundancy at the VMWare, JVM, repository and other critical layers of the runtime environment stack

● Warm-spare redundant Liferay architecture (proposed)

● Server failover capability

● Rapid nearline backup recovery

● Comprehensive off-site DR for catastrophic failure

In the event that a high-availability portal configuration is required, redundant nodes of the HA configuration will be purposefully isolated to discrete server and backend infrastructure as a complement to that logical HA configuration, to the benefit of higher reliability and faster recovery under various logical/physical architecture failure scenarios.

Omegabit operates comprehensive SNMP and service level monitoring of all configured hosts and services.  Triggers are adjustable and set by default to detect failures as well as symptoms of imminent failure.  Monitor alerts are responded to by live personnel, 24x7x365, and acted upon according to severity, per the terms of our SLA.

The core physical host infrastructure is inherently HA in terms of disk arrays, storage and network paths, physical servers, switching, etc.  Omegabit operates a modern VMWare-based infrastructure.  In the case of most physical failures, services are designed to continue transparently with no observable interruption to operations.  In the case of logical failures, the VM, JVM, and Liferay backend service configuration is proposed as an HA setup, to practical limits.  If a higher level of resilience is required than is proposed, we are able to accommodate that as additional scope.  Disaster Recovery (DR) is an inherent component of the regular day-to-day operations performed by Omegabit, as a core function of the hosting operations supplied for all tenants.

Omegabit offers multiple redundant layers of protection including but not limited to:

● Logical and physical redundancy at the VMWare, JVM, repository and other critical layers of the runtime environment stack

● Warm-spare redundant Liferay architecture (proposed)

● Server failover capability

● Rapid nearline backup recovery

● Comprehensive off-site DR for catastrophic failure

Backup snapshots of the entire VM stack are performed every 2hrs, and the offsite archives of those backups are continuous to a second physical location.  Retention is 48hrs for 2hr snaps, 30 days for dailies, and 16 weeks for weeklies.  We can accommodate longer retention if necessary.  Some of these retention policies impact RPO.

For PCI, you may want logs to last up to 1yr, but that can be accomplished through application design or by depending on our backups.  We recommend using both strategies depending on your reporting needs.

Backups should be considered for disaster recovery purposes only.  Our retention policy is variable and based upon data volume.  Depending upon the environment, rollbacks to the previous day, several days, or weeks are available, but with sporadic snapshots between periods.  Therefore, a specific point-in-time recovery may not be possible.  We are typically able to restore backward up to several weeks depending upon the total size of your store.

 

Omegabit can provide additional backup and archival services to meet specific requirements on a needs basis.  Please contact your sales representative for more information.

 

Omegabit features a comprehensive alternate-site DR plan that includes regular off-site archives using Omegabit-owned and -managed equipment.  Backup to the public cloud (e.g. Amazon) is optional but requires special arrangement and may not be compatible with some PII/HIPAA requirements.  Specific features for disaster recovery vary by tier of service; please see the SOW for complete details on RTO/RPO times and obligations.

 


