Big Objects: Top 5 Ways to Optimize Them for Maximum Benefit

According to current industry wisdom, data is the oil that drives the engine of enterprise growth and success. A massive volume of it is generated every day and needs constant monitoring and management, so storing it all quickly becomes a problem. With enterprises routinely running up against their limited data storage, Salesforce introduced its ‘Big Objects’ solution in 2018.

Big Objects is Salesforce’s big-data solution for storing and managing enormous amounts of data (typically billions of records or more) within the Salesforce platform. Big Objects serve a dual purpose: they can archive data from other Salesforce objects, or they can hold massive datasets, such as customer information, brought in from external systems. Their premier feature is consistent performance whether they hold one million records or one billion. Another prominent factor is easy access for one’s own organization (employees) and for external systems (clients) through a standard set of APIs. Broadly, Big Objects are classified into two types:

  • Standard Big Objects– Defined by Salesforce itself and included in its product list, Standard Big Objects are available out of the box and are not customizable. 
  • Custom Big Objects– In contrast to Standard Big Objects, Custom Big Objects are defined and deployed by an organization itself, through the Metadata API or directly from Setup, to archive information unique to that organization (a minimal definition sketch follows this list). 
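
To make the Metadata API route concrete, here is a minimal sketch of a custom big object definition. The object and field names (Customer_Archive__b, Account_Id__c, Archived_Date__c) are hypothetical; what the platform fixes is the __b suffix and the requirement that every big object declare an index, whose fields act as its primary key:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- objects/Customer_Archive__b.object: a hypothetical custom big
     object, deployable through the Metadata API. -->
<CustomObject xmlns="http://soap.sforce.com/2006/04/metadata">
    <deploymentStatus>Deployed</deploymentStatus>
    <fields>
        <fullName>Account_Id__c</fullName>
        <label>Account Id</label>
        <length>18</length>
        <required>true</required>
        <type>Text</type>
    </fields>
    <fields>
        <fullName>Archived_Date__c</fullName>
        <label>Archived Date</label>
        <required>true</required>
        <type>DateTime</type>
    </fields>
    <!-- The index defines record identity and lists the only fields
         that standard SOQL can filter on, in this order. -->
    <indexes>
        <fullName>CustomerArchiveIndex</fullName>
        <fields>
            <name>Account_Id__c</name>
            <sortDirection>DESC</sortDirection>
        </fields>
        <fields>
            <name>Archived_Date__c</name>
            <sortDirection>DESC</sortDirection>
        </fields>
        <label>Customer Archive Index</label>
    </indexes>
    <label>Customer Archive</label>
    <pluralLabel>Customer Archives</pluralLabel>
</CustomObject>
```

Once deployed, records are written to a big object from Apex with Database.insertImmediate() rather than an ordinary DML insert.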

The millions of records stored in such Big Objects make up what Salesforce calls Large Data Volumes, or LDV. Managed proactively, LDV can be used to personalize and optimize the Salesforce system, which drives enterprise growth, improves the CRM’s internal performance, and enhances the overall user experience. But to achieve these benefits, Big Objects must be tuned to perform at their highest potential. This can be done in the following ways:

  • Database Statistics– Modern databases gather statistics on the amount and types of data stored inside them, which the query optimizer uses to access data and execute queries efficiently. Salesforce runs this statistics-gathering process nightly, so when large amounts of data are created, updated, or deleted through the API, the refreshed statistics can be used to understand data growth and to plan data maintenance and LDV strategy accordingly. 
  • Skinny Tables– A skinny table is a custom table in the Salesforce platform that contains a subset of frequently used fields from standard or custom Salesforce objects, avoiding the joins a normal query would require. Skinny tables are kept in sync with their source tables whenever the source tables are modified, and they are useful for resolving specific long-running queries, typically over objects with millions of records. Using them, performance for read-only operations such as reports, list views, and SOQL queries can be enhanced.
  • Force.com Query Plan– The Query Plan tool helps Salesforce developers understand how the platform will execute a SOQL query, and it can be used to optimize and speed up queries performed over large data volumes stored in Big Objects. It is especially useful for diagnosing SOQL queries, reports, or list views that take a long time to return results (see the query-plan sketch after this list).
  • Custom Indexing– Salesforce supports custom indexes to speed up queries, which is useful for SOQL queries that need to filter selectively on a field that isn’t indexed by default. Used improperly, custom indexes can actually slow queries down, so it’s best to write selective filter conditions so that Force.com scans only the rows your queries actually need (see the selectivity sketch after this list). Custom indexes can be requested by contacting Salesforce Customer Support. 
  • Data Divisions– Divisions segment an enterprise’s voluminous data into logical sections, which makes searches, reports, and list views more meaningful to users. They are ideal for organizations with extremely large amounts of data that can be logically separated by region, territory, or other criteria. When setting up divisions, create the relevant divisions and assign each record to the proper division so that your data stays effectively categorized.
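
For a concrete look at query plans, the REST API’s query endpoint accepts an explain parameter that returns the optimizer’s candidate execution plans instead of running the query; the Developer Console’s Query Plan tool surfaces the same information. The sketch below is one hypothetical way to reach it from Apex (the API version is a placeholder, and calling your own org’s REST API from Apex requires a remote site setting or named credential):

```apex
// Hypothetical sketch: ask the optimizer for its plans for a SOQL
// statement rather than executing it.
String soql = 'SELECT Id FROM Account WHERE CreatedDate = LAST_N_DAYS:30';

HttpRequest req = new HttpRequest();
req.setEndpoint(URL.getOrgDomainUrl().toExternalForm()
    + '/services/data/v57.0/query/?explain='
    + EncodingUtil.urlEncode(soql, 'UTF-8'));
req.setMethod('GET');
req.setHeader('Authorization', 'Bearer ' + UserInfo.getSessionId());

HttpResponse res = new Http().send(req);
// The JSON response contains a "plans" array describing each option.
System.debug(res.getBody());
```

Each plan in the response carries fields such as leadingOperationType (e.g., Index or TableScan) and a relativeCost value; a relativeCost below 1.0 means the filter is considered selective enough for an index to be used.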
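
To illustrate selectivity against a big object, the sketch below reuses the hypothetical Customer_Archive__b object defined earlier (field names are assumptions). Standard SOQL on a big object can only filter on its index fields, starting from the leading field and without gaps, which is what keeps such queries cheap at any volume:

```apex
// Selective: filters on the leading field of the big object's index,
// so only the matching rows are scanned.
List<Customer_Archive__b> archived = [
    SELECT Account_Id__c, Archived_Date__c
    FROM Customer_Archive__b
    WHERE Account_Id__c = '001xx000003DGbX'
];

// Not allowed in standard SOQL: this filter skips the leading index
// field (Account_Id__c). Index fields must be used in order, so the
// query below would have to be rewritten or handled asynchronously.
//   SELECT Account_Id__c, Archived_Date__c
//   FROM Customer_Archive__b
//   WHERE Archived_Date__c > 2020-01-01T00:00:00Z
```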

At this juncture, it is also important to point out that Big Objects are merely a storage option; to use them to archive historical data, an archiving solution is necessary. DataArchiva is one such tool, the first native Salesforce data archiving solution powered by Big Objects. By periodically archiving rarely used legacy data, DataArchiva can potentially save 85%+ of an enterprise’s data storage costs and double the CRM’s performance. To know more about it, please get in touch with us.
