Our Partners close more business.

Use these powerful resources to win more business, faster, with less effort.  
Call 877-411-2220 x121 for personal support with any opportunity.


Hosting Quote Estimator

GET a FREE Sandbox or Trial Environment NOW

How To Use This Tool:  

To find answers to common RFP and RFI questions, select a tag or search for terms like "security" or "performance". You will find common questions and answers grouped together in one record. Follow the tag links to refine your search. Supporting downloads and documentation are available below.

Please log in to obtain download access to additional supporting documentation. Registered users can also contribute to the database. You can request access by Contacting Us.

© Omegabit LLC, 2023


Access Control

Q:

Do you have a process that authorizes and maintains a list of authorized personnel, consultants, and vendors for maintenance activities? If yes, do you grant temporary credentials for one-time use or a very limited time period?

Do you allow non-local maintenance? If yes, do you employ multi-factor authentication for all sessions and network connections, and terminate the connection once completed?


A:

Database, search, and other ancillary services operating within the Client private infrastructure are exclusive to the use of the Client and are not shared with any other user, Client, or application except where explicitly intended by the Client application design. All database services access is restricted by firewall, connecting client IP, unique user IDs, view restrictions, and strong passwords. Omegabit will implement the most secure ("off before on") style of access control by default and coordinate with the Client to make informed, security-aware changes where required for the operation of the hosted application.

Access of this nature is always chaperoned.

All administration links require two-token VPN-linked authentication (password plus complex trust key), or an SSH tunnel, plus single-factor authentication for console access and additional secondary authentication for privileged access, by default. All restrictions and controls are configurable per Client requirements. Strong (15-character, complex) and unique passwords are always employed. Optional Google two-token public authentication, digital certificates, and personal keys are also supported on request. Hardware-based two-token authentication integration for Client systems is also supported as a customization.

 




Access Control Policy

Q:

Do you have documented access control policy and procedures?


A:

This is part of the IT Security Handbook.




Account Access Approval

Q:

Do administration/privileged access accounts require additional approval by appropriate personnel (e.g., system owner, business owner, chief information security officer)?


A:

Yes. As it relates to backend access, Clients may designate authorized approvers and any required workflow, e.g., validation from an independent Client Security Team, for approval. Access is only provided where explicitly requested and approved, and access is strictly limited on a need-to-know basis. Omegabit will recommend and follow best practices but defer to the Client on the preferred method of approval and on determining what level of access is appropriate for its administrative users. As it relates to front-end (portal UI) access and control, this is typically under the direct management of the Client at implementation and can vary based on the desired workflow and use case. Omegabit is able to advise Clients on the use of Liferay access and permissions controls, and on other considerations relating to PCI and similar compliance, e.g., encryption of designated data within the Liferay application database. These options are available to Clients on request and are typically determined in collaboration with Client engineering teams at the time of the application design.

The details of the approval process are established at onboarding time and implemented as part of Omegabit's customer management workflow to help ensure quality of service for any/all requests.

 

Configured per Customer Operations Policy and SLA terms.




Account Access - Automated Process

Q:

Do you have an automated process to remove or disable temporary and emergency accounts after a predefined period of time?

Do you have an automated process to disable inactive accounts after a defined period of time?

Do you have an automated process to expire passwords on a periodic basis and users must change passwords within this period? If yes, at what frequency?


A:

These are configurable settings in Liferay. Where they pertain to the Client's hosted infrastructure, they are available as options upon special request.




Account Suspended

Q:

Do you automatically suspend accounts after a maximum number of unsuccessful attempts? If so, what is that limit?

Do you require an administrator to unlock suspended accounts?


A:

These are configurable settings in Liferay. Where they pertain to the Client's hosted infrastructure, they are available as options upon special request.




Asset Management

Q:

Do you have an Asset Management Policy?


A:

Physical asset management is documented in the Omegabit Internal Operations Wiki as part of its asset controls for company servers and equipment. This information cannot be shared due to its proprietary and sensitive nature, but it is comprehensive and regularly updated to stay current with inventory control.




Asset Management - Inventory

Q:

Is there an asset management policy; and are all hardware and software assets maintained in an inventory system?

Do you employ automated mechanisms to help maintain an up-to-date, complete, accurate, and readily available inventory of system components?


A:

Yes, see previously supplied responses on this tab and tab 1 for related answers.

Inventory is regularly audited and confirmed against metrics reported automatically by monitoring systems.




Audit and Audit Records

Q:

Do your audit records contain detailed information such as full-text recording of privileged commands or the individual identities of group account users?

Do you have audit record storage capacity to maintain audit records for a significant amount of time?

Do you have documented audit and accountability policy and procedures?

Do you generate audit records that identify users and point in time when they accessed the system or service, and unauthorized access attempts?

Do you retain a list of auditable events that are adequate to support after-the-fact investigations of security events and audit needs? If yes, does the event list include execution of privileged functions?


A:

This is a configurable environment option.

Adjustable per Client requirements.

Auditing and documentation are extensive, and the method varies by task and by the layer of infrastructure where the change occurs; relevant changes are documented in customer-facing change management logs. Additional automated auditing is available as part of a custom configuration at any or all layers of the infrastructure by combining the appropriate facilities for each layer (Omegabit change management, inside the OS runtime, inside the Liferay runtime, etc.). Liferay also offers extensive, customizable auditing features for in-Liferay event logging. Liferay and Omegabit configurations are capable of supporting almost any stipulated auditing requirement. Additional configuration and services fees may apply.

This is a configurable environment option.

This is a configurable environment option. Execution of privileged actions and escalation in the OS are logged. All facets of auditing and logging are configurable.
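As a purely illustrative sketch (not the actual record format used by Omegabit or Liferay at any layer of the stack), audit records of the kind described above typically capture the acting user, the point in time, the action performed, whether it was a privileged operation, and whether it succeeded:

```python
# Illustrative sketch only: shows the kind of fields an audit record of the
# type described above might carry (user identity, point in time, action,
# privileged flag, success/failure). It is not the actual format used by
# Omegabit or Liferay at any layer of the stack.
import json
from datetime import datetime, timezone

def write_audit_record(log_path: str, user: str, action: str,
                       privileged: bool, success: bool) -> None:
    """Append one structured audit event as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,              # identifies the individual, even for group accounts
        "action": action,
        "privileged": privileged,  # flags execution of privileged functions
        "success": success,        # False captures unauthorized access attempts
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")

# Hypothetical usage: a privileged configuration change and a failed login.
write_audit_record("audit.log", "jdoe", "update-firewall-rule", privileged=True, success=True)
write_audit_record("audit.log", "unknown", "console-login", privileged=False, success=False)
```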

 




Audit Logs

Q:

What application and data access audit logs are available?


A:

By default, Omegabit server environments are configured with warning-level logging for all services and with Web request logging enabled, in a 90-day rotation. All logs are directly accessible to the customer, and advanced aggregation and reporting tools, as well as custom reporting, are supported on an as-needed basis (fees may apply). Omegabit is able to assist in configuring logging and verbosity at any layer of the infrastructure to meet specific business requirements or to trap specific issues.
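As a hedged illustration only (Omegabit enforces this at the web server, OS, and Liferay layers, not with application code like this), the default posture described above, warning-level verbosity with logs kept on roughly a 90-day rotation, corresponds to a policy along these lines:

```python
# Illustrative sketch only: approximates the default posture described above
# (warning-level service logging, logs retained on a ~90-day rotation) using
# the Python standard library. Omegabit's actual hosting stack enforces this
# at the web server, OS, and Liferay layers, not with application code.
import logging
from logging.handlers import TimedRotatingFileHandler

handler = TimedRotatingFileHandler(
    "service.log",    # hypothetical log file name
    when="D",         # rotate once per day
    interval=1,
    backupCount=90,   # keep roughly 90 days of rotated logs before pruning
)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

logger = logging.getLogger("service")
logger.setLevel(logging.WARNING)      # warning-level verbosity by default
logger.addHandler(handler)

logger.warning("example warning event")   # recorded
logger.info("example info event")         # suppressed at the default level
```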




Backups

Q:

When do you back up?

How often do you back up?

Do you conduct backups of user-level information, system-level information, and information system documentation, including security-related documentation; and protect the confidentiality, integrity, and availability of backup information at storage locations?

Are the backup and restore process and procedures reviewed, and backed up information validated at least annually?

What is the backup schedule and retention on these systems?

If there is an issue, what is the process for a restore?

Can you elaborate on the offsite archive?

RPO/RTO expectations and testing schedule?


A:

 

In the case of most failures, Omegabit features full redundancy and fault tolerance at the primary host facility as a function of the private cloud infrastructure. Full Disaster Recovery is only initiated in the event of a catastrophic facilities failure.

In the event of a catastrophic failure of the physical plant, Client services will fail over to one of our secondary DR locations. Omegabit has the ability to backhaul traffic between private NOCs/POPs or to route directly from the DR location and, depending on the nature of the failure, can activate BGP to re-route public IPs.

A DR process TOC is available for review on request (much of it is redacted for security).

The standard SLA terms apply. The formal promises for critical faults are (summarized - see SLA for more details):

An initial response time within 2 hours is promised for Severity I issues, and 4 hours for Severity II issues, whether notification comes by automated alert or customer contact.
(Actual response time is typically <15 minutes for critical issues.)

For non-catastrophic events, e.g., equipment or primary storage failure, an RTO not to exceed 12 hours is promised, with an RPO not to exceed 24 hours[1].

[1] Assumes worst case; in a typical failure scenario, our redundant cloud infrastructure can tolerate a failure, e.g., a server node, switch path, or disk failure, transparently or with minor administrative intervention, with recovery in <1 hour and no loss of data.

For catastrophic events requiring comprehensive relocation of service to a separate hosting facility, an RTO not to exceed 48 hours is promised, with an RPO not to exceed 2 weeks (15 days)[2].
[2] Special terms and retention policies are available on request. Assumes a worst-case disaster recovery scenario from offsite archives; the RPO in this "catastrophic" scenario is more typically <48 hours, from near-line backup.

Please see the supplied copy of the SOW and the sections on backups and Support Ticket and Escalation Procedures for more details. 

This is what is promised OOTB. Omegabit can accommodate additional requirements around these expectations as a special request, including hot DR failover, but substantial additional costs will apply for both DXP licensing and infrastructure.

What is offered OOTB is typically the best balance of cost and protection, practically speaking. If you require more, we'll support it.



Summary:

Backup snapshots of the entire VM stack are performed every 2 hours, and offsite archives of those backups are continuous to a second remote physical location. Retention is 2-hour snapshots for 48 hours, dailies for 30 days, and weeklies for 16 weeks. We can accommodate longer retention if necessary. Some of these retention policies impact RPO. For PCI, you may want logs to last up to 1 year; that can be accomplished through application design or by depending on our backups. We recommend using both strategies depending on your reporting needs.
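To make the retention tiers above concrete, the following minimal sketch (based only on the figures quoted above; actual terms are governed by the SLA and can be customized per Client) shows how the spacing between available restore points, and therefore the effective recovery point for older restores, widens with the age of the data:

```python
# Illustrative sketch of the default retention tiers quoted above:
# 2-hour snapshots kept for 48 hours, dailies for 30 days, weeklies for 16 weeks.
# Actual retention is governed by the SLA and can be customized per Client.
HOURS_PER_DAY = 24

def restore_point_spacing(age_hours: float) -> str:
    """Return the coarsest spacing between available restore points for data
    of the given age, i.e., the worst-case recovery point for that period."""
    if age_hours <= 48:
        return "2-hour snapshots (worst-case gap ~2 hours)"
    if age_hours <= 30 * HOURS_PER_DAY:
        return "daily snapshots (worst-case gap ~24 hours)"
    if age_hours <= 16 * 7 * HOURS_PER_DAY:
        return "weekly snapshots (worst-case gap ~7 days)"
    return "outside standard retention; a custom retention policy is required"

# Hypothetical ages: 6 hours, 3 days, 45 days, and 20 weeks back.
for age in (6, 72, 45 * HOURS_PER_DAY, 20 * 7 * HOURS_PER_DAY):
    print(f"{age / HOURS_PER_DAY:6.1f} days back -> {restore_point_spacing(age)}")
```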


Backups should be considered for disaster recovery purposes only. Our retention policy is variable and based upon data volume. Depending upon the environment, rollbacks to the previous day, several days, or weeks are available, but with sporadic snapshots between periods. Therefore, a specific point-in-time recovery may not be possible. We are typically able to restore backward up to several weeks depending upon the total size of your store.

Backups are automated using a combination of VMware and Nimble Storage technologies.

Backups are comprehensive and cover all aspects of internal and Client operations using a VM snapshot-based approach for rapid, transparent backup and recovery.

Backup and recovery procedures are exercised several times per month as a function of normal operations and to support snapshots and rollbacks for customers as part of their normal development activities.

 

If there is an issue, what is the process for a restore?

We can restore any VM snapshot on file in a matter of minutes (it usually takes about 20 minutes to mount the backup partition and re-fire the image).  Recovering items inside that image is a matter of logging in and parsing the necessary data (files, db backups, etc.).  Typically, if there is a need to restore, we will recover all the dependent VM nodes and simply restart them.  In most cases when we do a recovery it is at the customer's request (e.g., they stepped on something, accidentally), so, we'll do a whole or partial restore based on that need; the restore process is always "informed" as best as possible so that we are not just arbitrarily rolling you back to some point in time without understanding the goals.  In some cases, a partial restore is sufficient.  We will help to inform this decision based on your goals.

 

Can you elaborate on the offsite archive?

Backups and archives are performed at the SAN level (Nimble Storage+VMWare APIs).  Backups are cloned as archives to a redundant SAN at our LA location at One Wilshire/CoreSite, where we operate secondary/backup and DR infrastructure.
 

What is the DR process you have in place?

A TOC for our DR process is available on request (much of it is redacted for security). In the case of most failures, we have full redundancy at the primary host facility. In the case of a catastrophic failure of the physical plant, we would fail over to one of our secondary locations, in this case at One Wilshire in Los Angeles. We have the ability to backhaul traffic or route directly from LA and, depending on the nature of the failure, can activate BGP to re-route public IPs.
 

RPO/RTO expectations and testing schedule?

Recovery testing is scheduled quarterly but exercised much more frequently as a function of supporting our production customers.

The short answer is that we can recover from most failures transparently, or at least automatically by failover in our cloud.  All network and cloud infrastructure is fully redundant.  

Your Liferay portal may or may not be redundant depending on the setup; it would have to be clustered. Presently we are not discussing an application cluster, so the vhost nodes themselves are a single point of failure.

If a physical server fails, the vhost will automatically restart on another server and automatically rejoin service (minutes, typically).  System failures that require intervention may take longer to resolve, but we are typically responding within 15 minutes.  

If a network layer were to fail, it is usually transparent due to redundancy. 

Practically speaking, our reaction is very fast and we will respond aggressively to any interruption or degradation in service, at any hour.  
 




Backups - Alternate Location

Q:

Do you conduct backups of user-level information, system-level information, and information system documentation, including security-related documentation; and protect the confidentiality, integrity, and availability of backup information at storage locations?


A:

Local SAN and backup snapshots of all operational and Client/tenant data occur twice daily. Offsite archives of backup snapshots operate continuously and are typically <5 minutes behind the local backup, via secure high-speed transfer over a dedicated fiber link to an alternate facility on a privately managed switched circuit with tunneled encryption. See the SLA for retention details; standard retention is stated as "Backups should be considered for disaster recovery purposes only. Our retention policy is variable and based upon data volume. Depending upon the environment, rollbacks to the previous several days or weeks are available, but with sporadic snapshots between periods. Therefore, a specific point-in-time recovery may not be possible. We are typically able to restore backward up to several weeks depending upon the total size of your store." A 45-day retention is typical; however, custom retention policies are easily accommodated on request if a more specific policy is required.




Backup Testing

Q:

Provide evidence of the last BC/DR test and results.

Do you have Backup and DR test and results?


A:

Passed with no successful exploits or exceptions (May 2017); details cannot be divulged due to their proprietary and sensitive nature.

Omegabit features a comprehensive and robust high-availability host infrastructure with redundancy and alternate-location disaster recovery capabilities, including multiple layers of data backup and archive. This includes daily local SAN and backup snapshots, and offsite archives. See the SOW-SLA for standard terms, which can be adjusted to meet the specific needs of this implementation. The last tests passed with no successful exploits or exceptions (May 2017); details cannot be divulged due to their proprietary and sensitive nature.




Change Management - Security Impact

Q:

Do you conduct security impact analysis for changes to systems by qualified security professionals prior to change implementation?


A:

Yes.




Computing Devices

Q:

Do you prohibit remote activation of collaborative computing devices (e.g. networked white boards, cameras, and microphones) with the following exceptions: Help Desk Support; and provide an explicit indication of use to users physically present at the devices?


A:

These tools are treated as insecure communication channels and are used responsibly by trained and authorized personnel, with sensitivity to the transfer of secure information.




Configuration Management

Q:

Are all developers required to comply with configuration management process?


A:

Liferay: yes. Omegabit: yes. These methods are based on Liferay best practices and recommended methodologies designed to streamline and provide substantive versioning, release, and code promotion controls, and to help ensure that runtime environments are compatible with source and build systems. Many of these controls are inherent in the way the sandbox and development lifecycle is implemented by Liferay initially, managed by Omegabit, and followed as a matter of practice by Client developer teams. Omegabit's responsibility is in part to help ensure continuity across the teams, over time, as the application design and requirements evolve.


