Talend Big Data v7 Certified Developer
Add-On: This service comes with additional cost.
Talend certification exams are designed to be challenging to ensure that you have the skills to successfully implement quality Talend Big Data projects. Preparation is critical to passing.
This certification exam covers the Talend Big Data Basics, Talend Big Data Advanced – Spark Batch, and Talend Big Data Advanced – Spark Streaming learning plans. The emphasis is on the Talend Big Data architecture, the Hadoop ecosystem, Spark, Kafka, and Kerberos.
Certification exam details
Number of questions: 65
Exam duration: 65 minutes
Types of questions:
- Multiple choice
- Multiple response
Recommended experience
- At least six months of experience using Talend products
- General knowledge of Hadoop (HDFS, MapReduce v2, Hive, HBase, Sqoop, YARN), Spark, Kafka, the Talend Big Data architecture, and Kerberos
- Experience with Talend Big Data 7.x solutions and Talend Studio, including metadata creation, configuration, and troubleshooting
Preparation
To prepare for this certification exam, Talend recommends:
- Taking the Big Data Basics, Big Data - Spark Batch, and Big Data - Spark Streaming learning plans
- Studying the training material in the Talend Big Data v7 Certified Developer preparation training module
- Reading the product documentation and Community Knowledge Base articles
Badge
After passing this certification exam, you are awarded the Talend Big Data Developer Certified Practitioner badge. To learn more about the criteria for earning this badge, refer to the Talend Academy Badging program page.
Ready to register for your exam?
Connect to Talend Exam to register.
Certification exam topics
Big Data in context
- Define Big Data
- Understand the Hadoop ecosystem
- Understand cloud storage architecture in a Big Data context
Basic concepts
- Define Talend metadata stored in the repository
- Understand the main elements of Hadoop cluster metadata (see the sketch below)
- Create Hadoop cluster metadata
- Create additional metadata (HDFS, YARN, Hive)
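The Hadoop cluster metadata that Studio stores in the repository boils down to a handful of connection properties, chiefly the NameNode URI and the ResourceManager address. A minimal sketch of the same elements expressed with the standard Hadoop client API (the host names and ports are placeholder assumptions):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class ClusterMetadataSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // NameNode URI: the HDFS entry point (placeholder host/port)
        conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");
        // ResourceManager address: the YARN entry point (placeholder host/port)
        conf.set("yarn.resourcemanager.address", "resourcemanager.example.com:8032");

        // Verify the connection by asking HDFS for the user's home directory
        try (FileSystem fs = FileSystem.get(conf)) {
            System.out.println("Connected, home = " + fs.getHomeDirectory());
        }
    }
}
```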
Read and write data (HDFS, cloud)
- Understand HDFS
- Use Studio components to import Big Data files to and export them from HDFS (see the sketch below)
- Use Studio components to import Big Data files to and export them from the cloud
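Under the hood, the Studio HDFS components rely on the Hadoop FileSystem API. A minimal sketch of writing a file to HDFS and reading it back (host and paths are assumptions); the same API reaches cloud storage when given an s3a:// or abfs:// URI and the matching connector on the classpath:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReadWriteSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020"); // placeholder

        try (FileSystem fs = FileSystem.get(conf)) {
            Path file = new Path("/user/talend/demo/customers.csv"); // assumed path

            // Write: the equivalent of exporting a file to HDFS
            try (FSDataOutputStream out = fs.create(file, true)) {
                out.write("id;name\n1;Alice\n".getBytes(StandardCharsets.UTF_8));
            }

            // Read it back: the equivalent of importing the file from HDFS
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(fs.open(file), StandardCharsets.UTF_8))) {
                in.lines().forEach(System.out::println);
            }
        }
    }
}
```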
HBase
- Understand HBase principles and usage
- Use Studio components to connect to HBase
- Use Studio components to export data to an HBase table (see the sketch below)
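Exporting to an HBase table amounts to writing Put operations through the standard HBase client; note that HBase locates the cluster through ZooKeeper rather than the NameNode. A minimal sketch (the quorum host, table, and column family names are assumptions):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseExportSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "zk1.example.com"); // placeholder

        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("customers"))) {
            // One row = one Put; every column lives in a family (here "f")
            Put put = new Put(Bytes.toBytes("row-1"));
            put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Alice"));
            table.put(put);
        }
    }
}
```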
Sqoop
- Understand Sqoop principles and usage
- Create database metadata for Sqoop
- Use Studio components to import tables to HDFS with Sqoop (see the sketch below)
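A Sqoop import reads a JDBC table in parallel mappers and lands the rows in HDFS. A sketch invoking Sqoop 1.x programmatically through its runTool entry point (the JDBC URL, credentials, table, and target directory are assumptions; the argument array mirrors the sqoop CLI flags):

```java
import org.apache.sqoop.Sqoop;

public class SqoopImportSketch {
    public static void main(String[] args) {
        // Equivalent to: sqoop import --connect ... --table ... --target-dir ...
        int exit = Sqoop.runTool(new String[] {
                "import",
                "--connect", "jdbc:mysql://db.example.com:3306/sales", // placeholder
                "--username", "talend",
                "--password-file", "/user/talend/.dbpass",  // keeps the secret off the command line
                "--table", "customers",
                "--target-dir", "/user/talend/sqoop/customers",
                "--num-mappers", "4"                         // parallel copy across 4 mappers
        });
        System.exit(exit);
    }
}
```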
Hive
- Understand Hive principles and usage
- Create database metadata for Hive
- Use Studio components to import data to a Hive table (see the sketch below)
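Hive metadata in the repository is essentially a JDBC connection to HiveServer2. A minimal sketch, with the hive-jdbc driver on the classpath, that creates a table and loads a file already sitting in HDFS into it (URL, credentials, and paths are assumptions):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveLoadSketch {
    public static void main(String[] args) throws Exception {
        // HiveServer2 JDBC endpoint (placeholder host/port/database)
        String url = "jdbc:hive2://hive.example.com:10000/default";
        try (Connection conn = DriverManager.getConnection(url, "talend", "");
             Statement stmt = conn.createStatement()) {

            stmt.execute("CREATE TABLE IF NOT EXISTS customers (id INT, name STRING) "
                    + "ROW FORMAT DELIMITED FIELDS TERMINATED BY ';'");

            // Move a file that is already in HDFS into the table's storage
            stmt.execute("LOAD DATA INPATH '/user/talend/demo/customers.csv' "
                    + "OVERWRITE INTO TABLE customers");

            try (ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM customers")) {
                while (rs.next()) {
                    System.out.println("rows loaded: " + rs.getLong(1));
                }
            }
        }
    }
}
```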
Standard, batch, and streaming Jobs
- Understand the differences between standard, batch, and streaming Jobs
- Know when to use a standard, batch, or streaming Job
- Migrate Jobs
Hadoop
- Use Studio components to process data stored in a Hive table
- Analyze Hive tables in the Profiling perspective
- Understand MapReduce Jobs in Studio
- Create a Big Data batch MapReduce Job to process data in HDFS (see the sketch below)
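A Big Data batch Job built for the MapReduce framework compiles down to mapper and reducer classes of the kind shown here. A minimal word-count sketch over files in HDFS (input and output directories come from the command line):

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountSketch {

    // Map phase: emit (word, 1) for every token in the input split
    public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context ctx)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    ctx.write(word, ONE);
                }
            }
        }
    }

    // Reduce phase: sum the counts collected for each word
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            ctx.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word-count-sketch");
        job.setJarByClass(WordCountSketch.class);
        job.setMapperClass(TokenMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not already exist
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```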
Spark
- Understand Spark principles and usage
- Set up Spark batch Jobs (see the sketch below)
- Set up Spark Streaming Jobs
- Troubleshoot Spark Jobs
- Optimize Spark Jobs at runtime
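A Spark batch Job is, at bottom, DataFrame code of the kind below. A minimal sketch that reads a CSV from HDFS, filters it, and writes Parquet (the paths are assumptions); the master and resource settings normally arrive from the submit-time Spark configuration rather than the code:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

import static org.apache.spark.sql.functions.col;

public class SparkBatchSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("spark-batch-sketch")
                .getOrCreate(); // master and deploy mode come from spark-submit

        // Read a semicolon-delimited CSV with a header line from HDFS
        Dataset<Row> customers = spark.read()
                .option("header", "true")
                .option("sep", ";")
                .csv("hdfs:///user/talend/demo/customers.csv"); // assumed path

        // Transform: keep rows with a non-null name, then write as Parquet
        customers.filter(col("name").isNotNull())
                .write()
                .mode("overwrite")
                .parquet("hdfs:///user/talend/demo/customers_clean");

        spark.stop();
    }
}
```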
YARN
- Understand YARN principles and usage
- Tune YARN
- Monitor Job execution with web UIs
- Use Studio to configure resource requests to YARN (see the sketch below)
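The resource requests you configure for a Spark Job on YARN map to standard Spark properties. A sketch of those settings expressed in code (the values and queue name are illustrative assumptions); the YARN ResourceManager web UI then shows how the request was satisfied:

```java
import org.apache.spark.sql.SparkSession;

public class YarnResourceSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("yarn-resource-sketch")
                .master("yarn")                          // run on the YARN cluster
                .config("spark.executor.instances", "4") // how many executor containers to request
                .config("spark.executor.memory", "2g")   // heap per executor container
                .config("spark.executor.cores", "2")     // vcores per executor container
                .config("spark.yarn.queue", "etl")       // target scheduler queue (assumed name)
                .getOrCreate();

        // A trivial action so YARN actually allocates the executors
        System.out.println(spark.range(1_000_000).count());
        spark.stop();
    }
}
```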
Kafka
- Understand Kafka principles and usage
- Use Studio components to produce data in a Kafka topic (see the sketch below)
- Use Studio components to consume data from a Kafka topic
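Producing to a topic is a small amount of code with the standard Kafka client. A minimal producer sketch (the broker address and topic name are assumptions); the consuming side is the mirror image, built on KafkaConsumer:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KafkaProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1.example.com:9092"); // placeholder broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish one keyed message to the "customers" topic (assumed name)
            producer.send(new ProducerRecord<>("customers", "1", "Alice"));
            producer.flush();
        }
    }
}
```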
Big Data Streaming Jobs
- Understand Big Data Streaming Jobs in Studio
- Tune Streaming Jobs (see the sketch below)
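A Streaming Job pairs a source such as the Kafka producer above with Spark's streaming engine, and tuning is largely about the processing interval and the rate at which records are pulled per batch. A Structured Streaming sketch in that vein (broker, topic, and checkpoint location are assumptions):

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.Trigger;

public class SparkStreamingSketch {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("spark-streaming-sketch")
                .getOrCreate();

        // Continuously read the "customers" topic (assumed name)
        Dataset<Row> stream = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "broker1.example.com:9092") // placeholder
                .option("subscribe", "customers")
                .option("maxOffsetsPerTrigger", "10000") // throttle: records per micro-batch
                .load();

        // Write each micro-batch to the console; the 10-second trigger plays
        // the role of the batch interval you tune in a Streaming Job
        stream.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
                .writeStream()
                .format("console")
                .option("checkpointLocation", "hdfs:///user/talend/chk/customers") // assumed
                .trigger(Trigger.ProcessingTime("10 seconds"))
                .start()
                .awaitTermination();
    }
}
```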
Setting up a Big Data environment
- Understand the Talend architecture for Big Data
- Understand Kerberos and security (see the sketch below)
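In a Kerberos-secured cluster, every client shown above must authenticate before touching HDFS, Hive, or HBase; Studio does this from the principal and keytab you reference in the cluster metadata. A sketch of the underlying Hadoop call (the principal and keytab path are assumptions):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosLoginSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Tell the Hadoop client the cluster expects Kerberos, not simple auth
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);

        // Authenticate from a keytab (placeholder principal and path)
        UserGroupInformation.loginUserFromKeytab(
                "talend@EXAMPLE.COM", "/etc/security/keytabs/talend.keytab");

        System.out.println("Logged in as "
                + UserGroupInformation.getCurrentUser().getUserName());
    }
}
```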