
Dianping Order System Database Sharding in Practice



The original Dianping order table had already grown past 200 GB. With so many query dimensions, even after adding two read replicas and optimizing indexes, many queries still performed poorly. Last year's wave of flash-sale promotions repeatedly pushed the database to its limit, and the application could only protect itself with rate limiting and asynchronous queues. Meanwhile business requirements kept pouring in: the original order model could no longer satisfy them, yet running DDL on the original order table had become extremely difficult. As these problems grew more acute, sharding the order database became more and more urgent.

Our goal this time: not to have to worry about order capacity for the next ten years.

Vertical segmentation

First, the order database was vertically split: the original database was divided into a basic order database, an order-process database, and so on. This article will not dwell on that part.

Horizontal segmentation

Vertical splitting eased the pressure on the original single cluster, but flash sales still stretched it thin, and the original order model could no longer meet business needs. So we designed a new unified order model. To serve C-end users, B-end merchants, customer service, operations, and so on, we shard the data both by user ID and by merchant ID, and synchronize it to an operations database via PUMA (our in-house real-time MySQL binlog parsing service).

Segmentation strategy

1. Lookup-table segmentation

Record the mapping between each ID and its database in a separate lookup database.

Advantages: the ID-to-database mapping algorithm can be changed at will.
Disadvantages: introduces an additional single point of failure.

2. Range segmentation

For example, segment by time range or by ID range.

Advantages: single-table size stays controllable, and the scheme scales out naturally.
Disadvantages: cannot solve the problem of writes concentrating on the hottest range.

3. Hash segmentation

Generally the shard is chosen by mod; the rest of this article focuses on the mod strategy.

After horizontal sharding we would like the scheme to be either once-and-for-all or at least easy to scale out, so we recommend taking mod 2^n, which behaves like a consistent hash.

Taking the unified order database as an example, our sharding scheme is 32×32: take the last four digits of the UserID, mod 32, to choose one of 32 databases; then take the same four digits, div 32 mod 32, to choose one of 32 tables within that database — 1024 tables in total. The online deployment is 8 clusters (each master-slave), with 4 databases per cluster.
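The routing rule above can be sketched in a few lines (the function name and the decimal-suffix extraction are illustrative assumptions, not our actual middleware code):

```python
def route(user_id: int) -> tuple[int, int]:
    """Map a user ID to (database index, table index) under the 32x32 scheme."""
    suffix = user_id % 10000          # last four decimal digits of the UserID
    db_index = suffix % 32            # mod 32 picks one of 32 databases
    table_index = suffix // 32 % 32   # div 32 mod 32 picks one of 32 tables
    return db_index, table_index
```

For example, a user whose ID ends in 6789 lands in database 5, table 20.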

Why is this method easy to scale horizontally? Consider the following two scenarios.

Scenario 1: Database performance reaches bottleneck

Method One

Under the existing rules, we can expand directly to 32 database clusters.

Method Two

If even 32 clusters cannot meet demand, adjust the sharding rule to (32 × 2^n) databases × (32 / 2^n) tables, which can take us all the way to 1024 clusters.
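The arithmetic of the adjusted rule can be sketched as follows (a toy illustration under the same assumed last-four-digits routing; n = 0 reproduces the original 32×32 layout):

```python
def route_scaled(user_id: int, n: int = 0) -> tuple[int, int]:
    """Route under the adjusted (32 * 2**n) databases x (32 // 2**n) tables rule."""
    suffix = user_id % 10000     # still only the last four digits of the UserID
    num_dbs = 32 * 2**n          # databases double with each step of n
    num_tables = 32 // 2**n      # tables per database halve to compensate
    return suffix % num_dbs, suffix // num_dbs % num_tables
```

For every n the total stays at 32·2^n × 32/2^n = 1024 tables; only how they are spread across databases changes.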

Scenario 2: Single-table capacity reaches its bottleneck (or 1024 tables are not enough for you)


If a single table ever exceeds 200 GB, the total would be 200 GB × 1024 ≈ 200 TB (under the current order model, roughly a billion orders — believe me, that day is just around the corner!). No problem: change the rule to 32 × (32 × 2^n). The database rule stays unchanged, and the tables inside each database fission further. Of course, under the current order routing (the last four digits of the UserID) there is still a ceiling: with only four digits, the maximum is 8192 tables. As for why we take only the last four digits, there is more on that below.

The other dimension is sharded by ShopID. Its 8×8 rule is analogous to the UserID one, so we will not repeat it. Note that the shop-dimension database stores only the order master table, just enough to satisfy shop-dimension queries.

Unique ID scheme

There are many schemes here as well; the mainstream ones are:

1. Database auto-increment IDs

Advantages: the simplest.
Disadvantages: single-point risk and a single-machine performance bottleneck.

2. A database cluster with appropriate auto-increment step sizes (the Flickr scheme)

Advantages: high availability, simple ID.
Disadvantages: A separate database cluster is required.

3. Twitter Snowflake

Advantages: High performance, high availability and easy expansion.
Disadvantages: requires a separate cluster plus ZooKeeper.

4. GUIDs and assorted random-number algorithms

Advantages: Simple.
Disadvantages: the generated IDs are longer, and collisions are possible.

Our plan

To reduce operating costs and avoid extra risk, we ruled out every solution that requires an independent cluster and adopted a scheme that carries business attributes:

Timestamp + user identification code + random number

There are several benefits:

  • Convenient and low cost.
  • There is basically no possibility of repetition.
  • It cooperates with our sharding rules: the user identification code is the last four digits of the user ID, so in query scenarios the order number alone is enough to locate the right database and table, with no user ID required. We take only four digits because we want the order number as short as possible, and four digits are sufficient for routing.
  • Sortable, because the timestamp comes first.

Of course there are drawbacks, such as a somewhat longer ID and slightly worse performance than an int/bigint key.
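A minimal sketch of such an ID generator (the field widths, function names, and fixed offsets are assumptions for illustration; the article does not spell out the production format):

```python
import random
import time

def generate_order_id(user_id: int) -> str:
    """Order ID = timestamp + user identification code + random number."""
    timestamp = int(time.time() * 1000)   # millisecond timestamp keeps IDs sortable
    user_code = user_id % 10000           # last four digits of the user ID
    rand = random.randint(0, 999999)      # random tail to avoid collisions
    return f"{timestamp}{user_code:04d}{rand:06d}"

def route_from_order_id(order_id: str) -> tuple[int, int]:
    """Route by the order number alone: the user code is embedded in the ID."""
    user_code = int(order_id[-10:-6])     # the four user-code digits before the tail
    return user_code % 32, user_code // 32 % 32
```

Because the user code sits at fixed offsets, a customer-service lookup that has only the order number can still be routed to the right shard.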

Other issues

  • Transaction support: we split the entire order-domain aggregate along the same dimension, so transactions within an aggregate are supported.
  • Complex queries: after vertical splitting you can say goodbye to JOINs, and after horizontal splitting every query condition must include the sharding dimension, such as querying the orders of a specific user. For cross-dimension queries, even if the middleware could support them by assembling results in memory, such queries usually should not hit the online database at all; run them elsewhere, or convert them into the sharding dimension by other means.

Data migration

Database splitting is usually an optimization and re-architecture undertaken after the business has grown to a certain scale; to ship quickly, few systems are sharded from day one. Vertical splitting is the easy part, mostly a change of data source. For horizontal splitting, migrating and cleaning the data is the real challenge, and we went through the following stages.

The first stage

  • Double-write to both databases (transaction success is judged by the old model); queries go to the old model.
  • A daily job reconciles the data (via the data warehouse) and backfills any differences.
  • Import historical data via a job.
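The stage-1 write path can be sketched as follows (the model transform and database handles are hypothetical stand-ins; the point is that the old model stays authoritative and a new-model failure never fails the order):

```python
import logging

def to_new_model(order: dict) -> dict:
    """Hypothetical transform from the old order model to the new one."""
    return {"order_id": order["id"], "user_id": order["user_id"]}

def create_order(order: dict, old_db, new_db) -> None:
    """Stage-1 dual write: transaction success is judged by the old model."""
    old_db.insert(order)                    # must succeed, or the order fails
    try:
        new_db.insert(to_new_model(order))  # best-effort write to the new model
    except Exception:
        # gaps are caught by the daily reconciliation job and backfilled
        logging.exception("new-model write failed; reconciliation will repair it")
```

In stage 2 the roles flip: the new model becomes authoritative, while the old one is written best-effort.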

The second stage

  • Historical data has been fully imported and daily reconciliation comes back clean.
  • Still double-writing, but transaction success is now judged by the new model, and online queries go to the new model.
  • The daily reconciliation job continues, backfilling any differences.

The third stage

  • The old model is no longer written synchronously; it is updated asynchronously only when an order reaches a final state.
  • At this stage only offline data still depends on the old model, and its downstream dependencies are extensive. Once the data-warehouse migration is complete, the old model can be retired entirely.

Summary

Not every table needs horizontal splitting; it depends on the data volume and its growth rate. Horizontal splitting is a drastic measure that increases development complexity, so do not use it unless you have to.

In high-concurrency businesses, try to isolate online queries from offline queries, and transactional queries from operations/customer-service queries.

The choice of the sharding dimension is very important: pick it, above all, to solve the problems that motivated the split in the first place.

The database is not as strong as you might think and needs protection: prefer simple, well-indexed queries, so that the database stays predictable overall and long-term capacity planning and horizontal scaling remain easy.

Finally, many thanks to our excellent DBA team and database middleware team for their tremendous help on this project!


Follow our official WeChat account, "Meituan-Dianping Technical Team".