Flink 1.7 stable release: What are the benefits of the new features?

Last updated: January 28, 2019

Introduction: The Apache Flink community announced the release of Apache Flink 1.7.0. The latest release includes more than 420 resolved issues and some new additions to Flink, which will be described in the following sections of this article.

Problem Guide

1. Which versions of Scala does Flink 1.7 support?

2. What are the benefits of Flink 1.7 state evolution in actual production?

3. How does the SQL/Table API support temporal joins?

4. What connectors have been added in Flink 1.7?


I. Overview

With Flink 1.7.0, the Flink community moves closer to its goal of enabling fast data processing and building data-intensive applications seamlessly. The release includes a number of new features and improvements, such as support for Scala 2.12, an exactly-once S3 file sink, the integration of complex event processing with streaming SQL, and more, as described below.

II. New features and improvements

1. Flink supports Scala 2.12:

Apache Flink 1.7.0 is the first version to fully support Scala 2.12. This allows users to write Flink applications with newer versions of Scala and leverage the Scala 2.12 ecosystem.
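For example, an sbt project can now target Scala 2.12 directly. A minimal sketch of the build configuration (project layout and exact patch versions are illustrative):

```scala
// build.sbt -- hypothetical project targeting Scala 2.12 with Flink 1.7
scalaVersion := "2.12.7"

libraryDependencies ++= Seq(
  // %% appends the Scala binary version (_2.12) to the artifact name
  "org.apache.flink" %% "flink-scala"           % "1.7.0",
  "org.apache.flink" %% "flink-streaming-scala" % "1.7.0"
)
```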

2. Support for state evolution

In many cases, long-running Flink applications need to change over their lifecycle as requirements evolve. Changing the user state without losing the application's current progress is a key requirement for application development.

Flink 1.7.0 adds state schema evolution, which allows the state schema of a long-running application to be adjusted flexibly while maintaining compatibility with previous savepoints. With state schema evolution, you can add or remove fields from the state schema, so that an application can capture changed business requirements after it has been deployed.

When using Avro-generated classes as user state, state schema evolution now works out of the box, meaning that state schemas can evolve according to Avro's specification. While Avro types are the only built-in types supporting schema evolution in Flink 1.7, the community plans to extend support to other types in future Flink releases.
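As a sketch of what this looks like in practice, consider a hypothetical Avro record used as user state. Following Avro's schema-resolution rules, a new optional field with a default value can be added, and savepoints taken with the older schema will still restore cleanly (record and field names are assumptions for illustration):

```json
{
  "type": "record",
  "name": "Customer",
  "fields": [
    {"name": "id",   "type": "long"},
    {"name": "name", "type": "string"},
    {"name": "email", "type": ["null", "string"], "default": null}
  ]
}
```

Here the `email` field was added after the application was first deployed; because it declares a default, Avro can resolve old state written without it.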

3. Exactly-once S3 StreamingFileSink

The StreamingFileSink, introduced in Flink 1.6.0, has now been extended to support writing to the S3 file system with exactly-once processing guarantees. Using this feature, users can build end-to-end exactly-once pipelines that write to S3.
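A minimal sketch of such a pipeline is shown below. The bucket name and socket source are hypothetical, the job needs one of Flink's S3 filesystem implementations on the classpath, and checkpointing must be enabled for the exactly-once guarantee to hold:

```java
import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

public class S3SinkJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Exactly-once delivery depends on checkpointing being enabled.
        env.enableCheckpointing(60_000);

        // Row-encoded sink writing plain strings to an S3 bucket (hypothetical path).
        StreamingFileSink<String> sink = StreamingFileSink
            .forRowFormat(new Path("s3://my-bucket/output"),
                          new SimpleStringEncoder<String>("UTF-8"))
            .build();

        env.socketTextStream("localhost", 9999) // hypothetical source
           .addSink(sink);

        env.execute("s3-exactly-once-sink");
    }
}
```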

4. Streaming SQL supports MATCH_RECOGNIZE

This is an important addition to Apache Flink 1.7.0, which provides initial support for the MATCH_RECOGNIZE standard in Flink SQL. This feature combines Complex Event Processing (CEP) with SQL, making it easy to perform pattern matching on data streams and enabling a whole new set of use cases. (This feature is in beta.)
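As a sketch, a MATCH_RECOGNIZE query over a hypothetical Ticker table (table and column names are assumptions for illustration) could detect a V-shaped price pattern, where the price drops for one or more rows and then recovers:

```sql
SELECT *
FROM Ticker
    MATCH_RECOGNIZE (
        PARTITION BY symbol
        ORDER BY rowtime
        MEASURES
            START_ROW.rowtime        AS start_tstamp,
            LAST(PRICE_DOWN.rowtime) AS bottom_tstamp,
            LAST(PRICE_UP.rowtime)   AS end_tstamp
        ONE ROW PER MATCH
        AFTER MATCH SKIP TO LAST PRICE_UP
        PATTERN (START_ROW PRICE_DOWN+ PRICE_UP)
        DEFINE
            PRICE_DOWN AS price < LAST(PRICE_DOWN.price, 1) OR
                (LAST(PRICE_DOWN.price, 1) IS NULL AND price < START_ROW.price),
            PRICE_UP AS price > LAST(PRICE_DOWN.price, 1)
    ) MR;
```

The PATTERN clause expresses the event sequence as a regular expression over rows, while DEFINE gives the predicates each pattern variable must satisfy.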

5. Temporal tables and temporal joins in the Flink SQL / Table API

Temporal tables are a new concept in Apache Flink that provide a (parameterized) view over a table's change history and return the contents of the table at a specific point in time.

For example, consider a table with historical currency exchange rates. Over time, this table grows and changes as new exchange rates are added. A temporal table is a view that returns the actual state of those exchange rates at any given point in time. Using such a table, a stream of orders in different currencies can be converted to a common currency using the correct exchange rate.

Temporal joins allow streaming data to be joined against such a constantly changing/updated table, using either processing time or event time, in a memory- and compute-efficient manner that complies with ANSI SQL.
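In Flink 1.7, a temporal join is expressed via a temporal table function. A minimal sketch of the exchange-rate example (table, column, and function names are hypothetical; `Rates` would be registered over a `RatesHistory` table keyed by currency):

```sql
-- Join each order with the exchange rate valid at the order's event time.
SELECT
  o.amount * r.rate AS amount_eur
FROM Orders AS o,
  LATERAL TABLE (Rates(o.rowtime)) AS r
WHERE o.currency = r.currency;
```

`Rates(o.rowtime)` returns, for each order, the version of the rates table that was valid at that row's timestamp.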

6. Other features of streaming SQL

In addition to the major features mentioned above, Flink's Table & SQL API has been extended to cover more use cases. The following built-in functions have been added to the API: TO_BASE64, LOG2, LTRIM, REPEAT, REPLACE, COSH, SINH, and TANH. The SQL Client now supports defining views both in environment files and within CLI sessions. In addition, basic SQL statement auto-completion has been added to the CLI.
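A quick sketch of the new built-in functions in use (a constant-only SELECT, so no table is needed):

```sql
SELECT
  TO_BASE64('hello')       AS b64,       -- 'aGVsbG8='
  LOG2(8)                  AS log2_8,    -- 3.0
  LTRIM('  padded')        AS trimmed,   -- 'padded'
  REPEAT('ab', 3)          AS repeated,  -- 'ababab'
  REPLACE('aaa', 'a', 'b') AS replaced,  -- 'bbb'
  COSH(0), SINH(0), TANH(0);             -- 1.0, 0.0, 0.0
```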

The community also added an Elasticsearch 6 table sink, which allows storing updates of dynamic tables.

7. Versioned REST API

Starting with Flink 1.7.0, the REST API is versioned. This guarantees the stability of the Flink REST API, so third-party applications can be developed against a stable API. As a result, future Flink upgrades will not require changes to existing third-party integrations.
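Concretely, endpoints are served under a version prefix such as `/v1`. A sketch of querying a cluster (host, port, and job ID are placeholders for your own setup):

```shell
# List all jobs via the versioned REST API (default web port shown).
curl http://localhost:8081/v1/jobs

# Inspect checkpoint statistics for a specific job.
curl http://localhost:8081/v1/jobs/<job-id>/checkpoints
```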

8. Kafka 2.0 connector

Apache Flink 1.7.0 continues to add more connectors, making it easier to interact with external systems. In this release, the community added a Kafka 2.0 connector that allows reading from and writing to Kafka 2.0 with exactly-once guarantees.
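A minimal sketch of consuming from a Kafka 2.0 broker with the new "universal" connector (broker address, topic, and group ID are hypothetical):

```java
import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class Kafka2ReadJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // hypothetical broker
        props.setProperty("group.id", "demo-group");

        // The universal connector (no Kafka version in the class name)
        // tracks the latest Kafka client and works with Kafka 2.0 brokers.
        env.addSource(new FlinkKafkaConsumer<>("input-topic",
                                               new SimpleStringSchema(), props))
           .print();

        env.execute("kafka-2.0-read");
    }
}
```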

9. Local Recovery

Apache Flink 1.7.0 completes the local recovery feature by extending Flink's scheduler to take previous deployment locations into account during recovery.

If local recovery is enabled, Flink keeps a local copy of the latest checkpoint on the machine where a task runs. By scheduling tasks back to their previous locations, Flink can read the checkpoint state from local disk, minimizing the network traffic needed to restore state. This feature greatly improves recovery speed.
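Task-local recovery is disabled by default and is switched on in the cluster configuration:

```yaml
# flink-conf.yaml -- keep a local copy of checkpoint state on each TaskManager
state.backend.local-recovery: true
```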

10. Removal of Flink's legacy mode

Apache Flink 1.7.0 marks the release in which the FLIP-6 rework is fully completed and reaches feature parity with the legacy mode. Consequently, this release removes support for legacy mode. If you still need legacy mode, you can use Flink 1.6.

This article was written by a third-party contributor. The views expressed are the author's own and do not represent OFweek's position. If there is any infringement or other issue, please contact us.