In Apache Flink, several method calls are involved in defining and triggering job execution, including the following:
1. The execute() method: this is the call that starts execution of a Flink job. Invoking it builds the job graph from the transformations defined so far and submits it for execution.

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Define the job logic
DataStream<String> dataStream = env.fromElements("Hello", "World");
dataStream.print();

// Execute the job
env.execute("My Job");
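A closely related call is executeAsync(), which also triggers execution but returns immediately with a JobClient instead of blocking until the job finishes. The following is a minimal, self-contained sketch assuming Flink 1.11 or later; the class name AsyncSubmitExample is only illustrative.

import org.apache.flink.core.execution.JobClient;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class AsyncSubmitExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.fromElements("Hello", "World").print();

        // executeAsync() submits the job and returns a JobClient without waiting
        // for the job to finish; execution is triggered just as with execute().
        JobClient client = env.executeAsync("My Job (async)");
        System.out.println("Submitted job with ID: " + client.getJobID());
    }
}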
2. Transformation calls on a stream, such as source.map() and source.filter(): these do not run anything by themselves. They only add operators to the dataflow graph; the operator logic executes later, once the job is submitted with execute() or executeAndCollect().

DataStream<String> source = env.fromElements("Hello", "World");
DataStream<Integer> result = source.map(new MapFunction<String, Integer>() {
    @Override
    public Integer map(String value) throws Exception {
        return value.length();
    }
});
result.print();
3. Chained transformations such as map(), filter(), and reduce(): like the previous case, chaining operators only declares the processing pipeline. Even the print() sink at the end does not start the job; it merely registers an output (a demonstration of this lazy evaluation follows the snippet).

DataStream<String> source = env.fromElements("Hello", "World");
DataStream<Integer> result = source.map(new MapFunction<String, Integer>() {
    @Override
    public Integer map(String value) throws Exception {
        return value.length();
    }
}).filter(new FilterFunction<Integer>() {
    @Override
    public boolean filter(Integer value) throws Exception {
        return value > 5;
    }
});
result.print();
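To make the lazy-evaluation behavior concrete, here is a small self-contained sketch (the class name LazyEvaluationDemo is illustrative): nothing is processed or printed until env.execute() is reached at the end of main.

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class LazyEvaluationDemo {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // map() and print() only add nodes to the dataflow graph;
        // no data is processed at this point.
        DataStream<Integer> lengths = env
                .fromElements("Hello", "World")
                .map(new MapFunction<String, Integer>() {
                    @Override
                    public Integer map(String value) throws Exception {
                        return value.length();
                    }
                });
        lengths.print();

        // Only this call builds the job graph, submits it, and runs the operators above.
        env.execute("Lazy evaluation demo");
    }
}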
4. The executeAndCollect() method: this call is similar to execute() in that it triggers execution of the job, but it additionally brings the results of a stream back into a local variable on the client. Note that executeAndCollect() is defined on DataStream, not on the execution environment.

DataStream<String> source = env.fromElements("Hello", "World");
DataStream<Integer> result = source.map(new MapFunction<String, Integer>() {
    @Override
    public Integer map(String value) throws Exception {
        return value.length();
    }
});

// Trigger execution and collect at most 10 elements of "result" into a local list
List<Integer> resultList = result.executeAndCollect("My Job", 10);
for (Integer value : resultList) {
    System.out.println(value);
}
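For unbounded or large streams, the overloads of executeAndCollect() that take no element limit return a CloseableIterator, which should be closed so the job does not keep running in the background; try-with-resources is the usual pattern. A minimal sketch, assuming a recent Flink version (1.12 or later) and reusing the result stream from the snippet above:

import org.apache.flink.util.CloseableIterator;

// Triggers execution; elements are streamed back as they are produced, and
// closing the iterator releases the connection and cancels the job if it is
// still running.
try (CloseableIterator<Integer> iterator = result.executeAndCollect("My Job")) {
    while (iterator.hasNext()) {
        System.out.println(iterator.next());
    }
}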
In summary, it is execute() (or variants such as executeAsync()) and executeAndCollect() that actually trigger execution of a Flink job; once the job is submitted, the operator logic defined by the transformations is executed as data flows through the pipeline.