Pinterest’s feed architecture and algorithm

The feed stream on the Pinterest home page originally aggregated pins (similar to Weibo posts) from the accounts a user follows and sorted them in natural time order. Later versions of the feed system abandoned that chronological order in favor of rules and algorithms, a design known internally as Smart Feed. Its algorithm and architecture, consolidated here from Pinterest’s public material, are well worth studying for architects of feed products in the industry.

Every Pinterest user’s home page is personalized. Roughly one third of Pinterest’s traffic goes to the feed page, making it one of the most critical pages in the entire system. Maintaining 99.99% availability while engineers rolled out the new Smart Feed was therefore one of the metrics used to judge whether the project succeeded.

The main algorithms and rules of Pinterest Smart Feed are as follows:

Pins from different sources are aggregated into the feed at different frequencies.

Pins are filtered according to algorithms and weights. Content from low-quality sources does not have to be shown every time; the system can decide which pins appear immediately and which are delayed. Pin quality is measured from the point of view of the receiving user.

Pins are sorted best-first, not newest-first. A pin may be the newest one from its source, yet still rank below older pins from other sources.
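
To make the ordering rule concrete, here is a minimal sketch, assuming a pin is just a (score, timestamp) pair and that the field names are hypothetical: the candidates are sorted best-first rather than newest-first.

```python
from dataclasses import dataclass

@dataclass
class Pin:
    pin_id: str
    score: float     # per-user relevance weight (hypothetical field)
    created_at: int  # publish time, as a Unix timestamp

candidates = [
    Pin("a", score=0.40, created_at=300),  # newest, but low quality
    Pin("b", score=0.95, created_at=100),  # older, but high quality
    Pin("c", score=0.70, created_at=200),
]

# Newest-first (the old, natural order): a, c, b
by_time = sorted(candidates, key=lambda p: p.created_at, reverse=True)

# Best-first (Smart Feed): b, c, a -- the newest pin is not necessarily on top
by_score = sorted(candidates, key=lambda p: p.score, reverse=True)

print([p.pin_id for p in by_time])   # ['a', 'c', 'b']
print([p.pin_id for p in by_score])  # ['b', 'c', 'a']
```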

The Pinterest feed pipeline is composed of the following parts: the data sources on the far left, and on the far right the Pinterest waterfall feed that the user sees. The three services in the middle are described below.

Feed worker

Feed Worker responsibilities: receive new pins, assign each pin a weight per receiving user, and store the result. The same pin can carry different weight scores for different receiving users.

New pins come from three main sources: followed users, related content, and followed interests. After scoring a pin from any of these sources, the worker inserts it into a pool; each pool is a priority queue for a single user, from which the highest-priority item is taken first.

Because the Feed Worker stores data per receiving user, all pins have already been fanned out along follow relationships (that is, the push model commonly used for feeds).
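
A minimal sketch of the worker’s push fan-out, assuming an in-memory dict of per-user pools and a hypothetical score(pin, user) function; the real worker persists these pools, but the shape is the same: score the pin once per receiving user and push it into that user’s priority queue.

```python
import heapq
from collections import defaultdict

# One priority queue ("pool") per receiving user. heapq is a min-heap,
# so scores are negated to pop the highest-scored pin first.
pools: dict[str, list[tuple[float, str]]] = defaultdict(list)

def score(pin_id: str, user_id: str) -> float:
    """Hypothetical per-user weight; the real scoring model is not public."""
    return (hash((pin_id, user_id)) % 1000) / 1000.0

def fan_out(pin_id: str, followers: list[str]) -> None:
    """Push mode: on publish, insert the scored pin into every follower's pool."""
    for user_id in followers:
        heapq.heappush(pools[user_id], (-score(pin_id, user_id), pin_id))

fan_out("pin123", followers=["alice", "bob"])
fan_out("pin456", followers=["alice"])
# The same pin can have a different score in each user's pool.
print(pools["alice"])
```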

Feed Content Generator

The Feed Content Generator is responsible for returning the new pins that have arrived since the user’s last visit. The Content Generator can return the top N or all of the new pins; pins the user has fetched (that is, browsed) are removed from the pool. The Content Generator may rearrange pins from multiple publishers according to certain rules, but it cannot change the ordering of the underlying priority queue: higher-priority items in the queue are still taken first.
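
Continuing the sketch above, the Content Generator’s contract can be expressed as draining the top N items of a user’s pool without disturbing the queue’s priority order (the pools and the negated-score convention are the same hypothetical structures as before).

```python
import heapq

def generate_content(pools, user_id: str, n: int | None = None) -> list[str]:
    """Return the top-n (or all) new pins for user_id, best first.
    Served pins are removed from the pool, mirroring the rule that
    browsed pins leave the pool."""
    pool = pools.get(user_id, [])
    limit = len(pool) if n is None else min(n, len(pool))
    served = []
    for _ in range(limit):
        _, pin_id = heapq.heappop(pool)  # always the highest-priority item
        served.append(pin_id)
    return served

pools = {"alice": [(-0.9, "pin456"), (-0.4, "pin123")]}
heapq.heapify(pools["alice"])
print(generate_content(pools, "alice", n=1))  # ['pin456']
```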

Smart feed service

The materialized view saves a snapshot of the user’s last feed list. This service does not need to re-rank the feed; it simply stores the previous result in the order it was served, because it is the historical list the user has already read. Reads and writes are infrequent, so it can offer better availability. In addition, because the length of the historical list can be capped, storage is bounded, which makes it cheap to add read replicas and raise read availability further.
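
A sketch of the materialized view’s behavior, using an in-memory stand-in for the HBase-backed store (the cap of 500 is a hypothetical number): it only appends what was last served, in served order, and caps the history length so storage stays bounded.

```python
from collections import deque

class MaterializedFeed:
    """Snapshot of the feed the user has already been served (read-mostly)."""

    def __init__(self, max_len: int = 500):  # cap is a hypothetical number
        self.items: deque[str] = deque(maxlen=max_len)

    def append_served(self, pin_ids: list[str]) -> None:
        # Saved exactly in the order they were served; never re-ranked here.
        self.items.extendleft(reversed(pin_ids))  # newest batch at the front

    def read(self) -> list[str]:
        return list(self.items)

view = MaterializedFeed(max_len=3)
view.append_served(["p1", "p2"])
view.append_served(["p3", "p4"])
print(view.read())  # ['p3', 'p4', 'p1'] -- oldest entries fall off the cap
```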

The feed relies on the Content Generator to supply new pins. If the Content Generator is unavailable, the service degrades gracefully: the user still receives the historical list, returned from the materialized store.

Through these three services, Pinterest gains flexible control over what the feed returns. Each service has a clear responsibility of its own, and together they meet the goal of returning personalized content for every user.

Feed storage

Pinterest’s feed storage has to satisfy the following requirements:

Writing newly published feed entries. Because Pinterest uses the push model, this path faces a very high write QPS, but users can tolerate some write latency.

Reading the materialized home feed. The read QPS is small compared with writes, but users require low latency.

Deleting feed entries.
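
These three requirements map onto a small storage interface. The abstract sketch below merely names the operations and their differing latency tolerances; the method names are hypothetical, not Pinterest’s actual API.

```python
from abc import ABC, abstractmethod

class FeedStore(ABC):
    """Shape of the feed store implied by the three requirements above."""

    @abstractmethod
    def write(self, user_id: str, pin_id: str, score: float) -> None:
        """Fan-out write path: very high QPS, some latency is acceptable."""

    @abstractmethod
    def read_materialized(self, user_id: str, limit: int) -> list[str]:
        """Home-feed read path: lower QPS, but must be low latency."""

    @abstractmethod
    def delete(self, user_id: str, pin_id: str) -> None:
        """Remove a feed entry (e.g. the pin or the follow was removed)."""
```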

A simple design would write all feed entries to a single store, which makes reads, updates, and deletes trivial to implement. At Pinterest’s current scale, however, the system holds an enormous volume of data and serves millions of accesses per second. After a comprehensive evaluation, Pinterest chose HBase to meet these requirements: the business scenario demands very high read, write, and update rates, and HBase delivers strong performance for all three.

When a user publishes a pin, it is pushed to all of that user’s followers, and those followers may be spread across every HBase region. A single publish may therefore touch many regions, locking the WAL log of each region, applying the update, and then unlocking it. Locking the WAL for every write/delete/update operation is very inefficient and quickly becomes the system’s bottleneck. A better approach is to batch HBase operations, which raises HBase throughput but, on the other hand, increases access latency. To satisfy both kinds of requirements, Pinterest adopted a dual HBase cluster design, writing data at different stages to different HBase clusters, as shown in the illustration.
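
A sketch of the batching idea, assuming a hypothetical hbase client that exposes a bulk mutate_rows(table, rows) call (not a real library API; real code would use the HBase Java or Thrift client’s batch facilities): instead of one WAL-locking mutation per follower, writes are buffered and flushed in batches, trading some latency for throughput.

```python
class BatchingWriter:
    """Buffer per-follower feed mutations and flush them to HBase in batches."""

    def __init__(self, client, table: str, batch_size: int = 500):
        # `client` is a hypothetical HBase client with a bulk
        # mutate_rows(table, rows) method; batch_size is illustrative.
        self.client = client
        self.table = table
        self.batch_size = batch_size
        self.buffer: list[tuple[str, dict]] = []

    def write(self, row_key: str, columns: dict) -> None:
        self.buffer.append((row_key, columns))
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        if self.buffer:
            # One bulk call amortizes WAL locking across many rows,
            # raising throughput at the cost of per-write latency.
            self.client.mutate_rows(self.table, self.buffer)
            self.buffer.clear()
```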

Zen is a service that provides graph storage on top of HBase.

After the SmartFeed Worker fans out the content a user publishes, it saves it in HBase; the asynchronous processing tasks are invoked through the PinLater service.
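
PinLater’s API is not public, so the sketch below uses Python’s standard queue module as a stand-in for an asynchronous job service: publishing only enqueues a fan-out job, and a background consumer does the per-follower writes later.

```python
import queue
import threading

jobs: "queue.Queue[dict]" = queue.Queue()

def enqueue_fanout(pin_id: str, followers: list[str]) -> None:
    """Called on publish; returns immediately, the writes happen asynchronously."""
    jobs.put({"pin_id": pin_id, "followers": followers})

def worker_loop() -> None:
    while True:
        job = jobs.get()
        for user_id in job["followers"]:
            # Here the real system would score the pin and write it to HBase.
            print(f"store {job['pin_id']} for {user_id}")
        jobs.task_done()

threading.Thread(target=worker_loop, daemon=True).start()
enqueue_fanout("pin123", ["alice", "bob"])
jobs.join()
```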

The SmartFeed Content Generator is responsible for returning the latest pins, scoring and sorting them.

When a user refreshes the feed and requests the home page, the SmartFeed service combines the results from the Content Generator with the materialized data in HBase and returns them to the user. If the request to the Content Generator times out, the system can still return the materialized results to the user. In the background, SmartFeed then deletes the data that has been materialized from the pool store on the left.

In practice, the materialized HBase store holds far more data than the pool of newly published pins, so these requests are very fast.
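
A sketch of this serving path under the stated assumptions (content_generator and materialized_store are hypothetical objects, and the 200 ms deadline is illustrative): try the Content Generator with a deadline, fall back to the materialized snapshot alone if it fails or times out, and materialize the newly served pins in the background.

```python
from concurrent.futures import ThreadPoolExecutor

def serve_feed(user_id, content_generator, materialized_store,
               executor: ThreadPoolExecutor, timeout_s: float = 0.2):
    """Return fresh pins (when available) followed by the materialized history."""
    history = materialized_store.read(user_id)  # fast, read-mostly path
    try:
        future = executor.submit(content_generator.generate, user_id)
        fresh = future.result(timeout=timeout_s)
    except Exception:
        # Graceful degradation: the Content Generator is slow or down,
        # so return only the previously materialized feed.
        return history
    # Background step: persist the served pins so they join the history
    # (the real system also removes them from the pending pool).
    executor.submit(materialized_store.append_served, user_id, fresh)
    return fresh + history
```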

Feed high availability

With the design above, the availability of the system is essentially the availability of the materialized HBase store. An HBase cluster is still exposed to GC pauses and to region migration after single-node failures, so a single HBase cluster can hardly guarantee 99.99% availability.

To solve this problem, a standby cluster is run in another EC2 availability zone. Data written to the primary cluster is replicated to the standby cluster within a few hundred milliseconds. When the primary cluster is unavailable, user requests can be served with data from the standby cluster. With this design, the availability of the entire system reaches 99.99%.
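
A sketch of the failover read, assuming hypothetical primary and standby client objects that share a read(user_id, limit) interface: serve from the primary, and fall back to the standby in the other availability zone when the primary errors out.

```python
def read_with_failover(user_id: str, primary, standby, limit: int = 100):
    """Read the materialized feed from the primary HBase cluster, falling back
    to the standby cluster in another availability zone.

    `primary` and `standby` are hypothetical clients exposing
    read(user_id, limit); replication to the standby lags by only a few
    hundred milliseconds, so its data is acceptably fresh.
    """
    try:
        return primary.read(user_id, limit)
    except Exception:
        # GC pause, region migration, or node failure on the primary:
        # degrade to the slightly stale standby rather than fail the request.
        return standby.read(user_id, limit)
```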

Reference

http://playining.tumblr.com/post/105293275179/building-a-scalable-and-available-home-feed

https://engineering.pinterest.com/blog/building-scalable-and-available-home-feed