Our Partners close more business.

Use these powerful resources to win more business, faster, with less effort.  
Call 877-411-2220 x121 for personal support with any opportunity.


Hosting Quote Estimator

GET a FREE Sandbox or Trial Environment NOW

How To Use This Tool:  

To find answers to common RFP and RFI questions, select a tag or search for terms like "security", "performance", etc.  You will find common questions and answers grouped together in one record.  Follow the tag links to refine your search.  Supporting downloads and documentation are available below.

Please log in to obtain download access to additional supporting documentation.  Registered users can also contribute to the database.  You can request access by Contacting Us.



Content tagged "scalability".

Hosting Provider - Scalable

Q:

Cloud auto-scale policies will be used by administrators to pre-configure automatic expansion to ensure peak performance during high usage, while minimizing cost during low usage. This includes horizontal auto-scaling (additional servers) and vertical auto-scaling (additional CPU/RAM). In addition, modern Operating Systems will allow hot-add for CPU/RAM, if required.


A:

All Operating Systems are hot-add capable, and auto-scaling is available. Some types of scaling changes require a restart of a component of the application stack; the proposed infrastructure is designed to make such changes transparent wherever practical. Note that expanding application server resources may have licensing implications. Resources are billed on a month-to-month basis and are available on demand. Part of Omegabit's role is to help the Client anticipate demand and make adjustments that maintain a satisfactory level of service for users during peak periods and unexpected events, rather than reacting after the fact, and to make the most efficient use of allocated resources as operations, demand, and the use case evolve. Omegabit's managed approach typically delivers better results, with similar cost control, than fully automated approaches: it ensures that resources are not repeatedly auto-scaled simply to compensate for an inefficiency in the application implementation, and it identifies cases where other optimizations that help normalize utilization would be of more specific benefit.
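Purely as an illustration of the kind of decision logic an auto-scale policy encodes, here is a minimal sketch. The thresholds, metric names, and function are hypothetical, not Omegabit's actual policy engine or tooling:

```python
# Hypothetical sketch of a horizontal-vs-vertical scaling decision.
# Thresholds and metrics are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class NodeMetrics:
    cpu_pct: float   # average CPU utilization over the sample window
    ram_pct: float   # average RAM utilization over the sample window

def scaling_action(cluster: list[NodeMetrics]) -> str:
    """Decide between vertical (hot-add CPU/RAM) and horizontal (add node) scaling."""
    avg_cpu = sum(n.cpu_pct for n in cluster) / len(cluster)
    avg_ram = sum(n.ram_pct for n in cluster) / len(cluster)

    if avg_cpu < 70 and avg_ram < 70:
        return "no change"                            # comfortable headroom remains
    if len(cluster) == 1 or avg_ram >= 85:
        return "vertical: hot-add CPU/RAM"            # grow the existing node(s) first
    return "horizontal: provision an additional app-server node"

print(scaling_action([NodeMetrics(cpu_pct=82.0, ram_pct=60.0),
                      NodeMetrics(cpu_pct=78.0, ram_pct=55.0)]))
```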




Scalability

Q:

Please describe your ability to scale your processing capacity. How long does it take to implement more processing capacity?

Can you share any information on application scalability?

Is there a breaking point where a new server is required?


A:

All scalable resources are immediately available on tap. Omegabit's private, Liferay-optimized cloud infrastructure can add or redistribute resources, including CPU, RAM, and storage, on demand, and we help assess where additional allocations will be of benefit under specific load conditions. All resources can be added or removed in any increment on a month-to-month basis. Complete additional server nodes, including OS and application configuration, can typically be provisioned in 24-48 hours. The preference is always to preempt surprise capacity increases through well-designed testing and optimization tuning.

 

Unfortunately, these are loaded questions.  The variables include whether the users are logging in, what the pages contain and how they are implemented, and the workflow.  Relative to other platforms, Liferay scales extremely well, assuming it has been properly implemented:  for example, it is not enough to write a Java application and wrap it in Liferay for delivery; the Liferay APIs must be used for data and web services in order to gain the benefits of Liferay's caching and optimization.  We've seen well-built apps scale incredibly well, and poorly built ones fall apart with very few users.

I have attached a Liferay white paper that explains the scale of the platform, OOTB, under specific (highly optimized, obviously) use-case conditions.  The catch, of course, is that there are no custom plugins or workflow, and the features were implemented selectively, in the manner that is least expensive.

Liferay claims:
• To help accurately demonstrate "enterprise scale," this study was commissioned with:
    • 1 million total users
    • 2 million documents with an average of 100KB per document
    • 10,000 sites with 50% of the sites having at least 5 children
    • 4 million message forum threads and posts
    • 100,000 blog entries and 1 million comments
    • 100,000 wiki pages

The key findings of the study are:
    • 1  As an infrastructure portal, Liferay Digital Enterprise can support over 36,250 virtual users on a single server, with mean login times under 378 ms and maximum throughput of 1,020+ logins per second.
    • 2  The platform's Document Repository easily supports over 18,000 virtual users accessing 2 million documents in the document repository.
    • 3  The platform's WCM scales to beyond 300,000 virtual users on a single Liferay Digital Enterprise server, with average transaction times under 50 ms and 35% CPU utilization.
    • 4  In collaboration and social networking scenarios, each physical server supports over 8,000 concurrent virtual users at average transaction times of under 800 ms.
    • 5  Given sufficient database resources and efficient load balancing, Liferay Digital Enterprise can scale linearly as servers are added to a cluster. With a properly configured system, doubling the number of Liferay Digital Enterprise servers doubles the maximum number of supported virtual users (a back-of-envelope illustration follows below).
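As a rough illustration of that linear-scaling claim: the per-server figure below is the white paper's single-server login number, the cluster sizes are arbitrary, and, as discussed further down, real-world capacity is usually about an order of magnitude lower:

```python
# Back-of-envelope illustration of ideal linear scaling (white-paper finding 5).
# 36,250 virtual users per server is the paper's single-server login figure;
# real deployments should plan for roughly an order of magnitude less.
PER_SERVER_VIRTUAL_USERS = 36_250

for nodes in (1, 2, 4, 8):
    capacity = nodes * PER_SERVER_VIRTUAL_USERS
    print(f"{nodes} node(s): ~{capacity:,} virtual users (ideal, linear)")
```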

Note that their infrastructure is based on bare metal servers running at clock speeds that are more typical of desktop systems:

1. Web Server
    • 1 x Intel Core i7-3770 3.40GHz CPU, 8MB L2 cache (4 core, 8 HT core)
    • 16GB memory
2. Application Server
    • 2 x Intel Xeon E5-2643 v4 3.40GHz CPU, 20MB L2 cache (6 core, 12 HT core)
    • 64GB memory, 2 x 300GB 15k RPM SCSI
3. Database Tier
    • 2 x Intel Xeon E5-2643 v4 3.40GHz CPU, 20MB L2 cache (6 core, 12 HT core)
    • 64GB memory, 4 x 146GB 15k RPM SCSI

With all respect to Liferay, this is not realistic.  Based on our real-world observations of portal implementations of all shapes and sizes, the actual capacity of most production systems is about one order of magnitude lower.  It is feasible to reach this level of capacity only under very narrow conditions.  Customizations can have a significant impact, and real-world implementations are typically much more transactionally expensive than this test case for practical reasons (look, usability, content value, etc.).  Caching and optimization aside, if your page is 5x more expensive to render, you can expect a 5x impact on this economy of scale, and in the real world that is frequently the case.

Our experience tells us that, in most cases, you can plan for about 3.5-4K concurrent, active, authenticated sessions per 8-core app server under heavy use while maintaining a good page-paint time of 3-5s (that is, users who are logged in and actively clicking; you can have many more inactive authenticated users with no performance impact, given enough memory), assuming sufficient db and support infrastructure.  In some use cases it may actually be much better.  Liferay also tolerates overloads extremely well, so more users will typically mean longer page load times, but the platform will still handle the demand.  We can accommodate custom bare-metal setups, but they are not economical and do not have the same benefits as the cloud servers, which more typically run at 2.4-2.6GHz.  That said, our 8-core AMD-based servers are typically faster than the 6-core HT servers Liferay used for its benchmark, and we see similar or in some cases higher economies of scale due to thread concurrency.
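To make that rule of thumb concrete, here is a minimal sizing sketch. The per-server session figure comes from the paragraph above; the headroom factor and the function itself are hypothetical planning assumptions, not a commitment:

```python
# Rough app-server sizing based on the rule of thumb above:
# ~3,500-4,000 active authenticated sessions per 8-core app server,
# assuming sufficient DB and supporting infrastructure.
import math

SESSIONS_PER_8_CORE_SERVER = 3_500  # conservative end of the 3.5-4K range

def app_servers_needed(peak_active_sessions: int, headroom: float = 0.25) -> int:
    """Estimate app-server count for a target peak, keeping spare headroom."""
    effective_capacity = SESSIONS_PER_8_CORE_SERVER * (1 - headroom)
    return max(1, math.ceil(peak_active_sessions / effective_capacity))

print(app_servers_needed(10_000))  # -> 4 servers at 25% headroom
```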

Your application use case, at least the one we have discussed so far, is very simple, and I expect that you will see the upper end of the scale in terms of achieving Liferay's "best speed", assuming good application design practices.  I do believe that you will need to add CPUs and eventually break out services into a more horizontal configuration to accommodate high concurrent demand.

What we are proposing as a starting point for you is intended to handle hundreds of concurrent public/unauthenticated users and anywhere from tens to hundreds of authenticated concurrent users, depending on use case (this is a partially qualified SWAG, at best).  Scale can be achieved both by adding CPU (and sometimes RAM) resources and by breaking out the services onto separate hosts (app, db, search, web acceleration).

For what it is worth, we are here to help, and we are well equipped to squeeze the most out of your infrastructure and to help you make informed decisions about how to scale.


