
Connect Soda to Apache Spark

Last modified on 20-Nov-24

For Soda to run quality scans on your data, you must configure it to connect to your data source.
To learn how to set up Soda and configure it to connect to your data sources, see Get started.

Spark packages
Connect to Spark DataFrames
Use Soda Library with Spark DataFrames on Databricks
Connect to Spark for Hive
Connect to Spark for ODBC
Connect to Spark for Databricks SQL
Test the data source connection
Supported data types

Spark packages

There are several Soda Library install packages for Spark.

Package Description
soda-spark-df Enables you to pass DataFrame objects into Soda scans programmatically, after you have associated temporary tables with DataFrames via the Spark API.
- For use with programmatic Soda scans, only.
- Supports Delta Lake tables on Databricks.
- Use for Spark DataFrames on Databricks.
soda-spark[hive] A package you add to soda-spark-df if you are using Apache Hive.
soda-spark[odbc] A package you add to soda-spark-df if you are using ODBC.
soda-spark[databricks] A package you use to install Soda Library for Databricks SQL on the Databricks Lakehouse Platform.
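
For example, assuming the core package is available from the same Soda package index used for the add-on packages below, you can install it with:

pip install -i https://pypi.cloud.soda.io soda-spark-df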

Connect to Spark DataFrames

  • For use with programmatic Soda scans, only.
  • Unlike other data sources, Soda Library for SparkDF does not require a configuration YAML file to run scans against Spark DataFrames.

A Spark cluster contains a distributed collection of data. Spark DataFrames are distributed collections of data that are organized into named columns, much like a table in a database, and which are stored in-memory in a cluster.

To make a DataFrame available to Soda Library to run scans against, you must use a driver program like PySpark and the Spark API to link DataFrames to individual, named, temporary tables in the cluster. You pass this information into a Soda scan programmatically. You can also pass Soda Cloud connection details programmatically; see below.

  1. If you are not installing Soda Library Spark DataFrames on a cluster, skip to step 2. To install Soda Library Spark DataFrames on a cluster, such as a Kubernetes cluster or a Databricks cluster, install libsasl2-dev before installing soda-spark-df. For Ubuntu users, install libsasl2-dev using the following command:
    sudo apt-get -y install unixodbc-dev libsasl2-dev gcc python-dev
    
  2. If you are not using Spark with Hive or ODBC, skip to step 3. Otherwise, install the separate dependencies as needed, and configure connection details for each dependency; see below.
    • for Hive: pip install -i https://pypi.cloud.soda.io soda-spark[hive], then configure the connection as described in Connect to Spark for Hive
    • for ODBC: pip install -i https://pypi.cloud.soda.io soda-spark[odbc], then configure the connection as described in Connect to Spark for ODBC
  3. Install soda-spark-df (see Install Soda Library) and confirm that you have completed the following.
    • set up a Spark session
      spark_session: SparkSession = ...user-defined-way-to-create-the-spark-session...
      
    • confirmed that your Spark cluster contains one or more DataFrames
      df = ...user-defined-way-to-build-the-dataframe...
      
  4. Use the Spark API to link the name of a temporary table to a DataFrame. In this example, the name of the table is customers.
    df.createOrReplaceTempView('customers')
    
  5. Repeat the previous step for each DataFrame against which you wish to run Soda scans, linking each one to its own named temporary table. Refer to the PySpark documentation.
  6. Define a programmatic scan for the data in the DataFrames, and include one extra method to pass all the DataFrames to Soda Library: add_spark_session(self, spark_session, data_source_name: str). The default value for data_source_name is "spark_df"; best practice dictates that you customize the name to your implementation. Refer to the example below.
from soda.scan import Scan

spark_session = ...your_spark_session...
df1.createOrReplaceTempView("TABLE_ONE")
df2.createOrReplaceTempView("TABLE_TWO")
...

scan = Scan()
scan.add_spark_session(spark_session, data_source_name="orders")
scan.set_data_source_name("orders")
scan.set_scan_definition_name('YOUR_SCHEDULE_NAME')
... all other scan methods in the standard programmatic scan ...
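
For reference, a minimal way to finish the sketch above is to add checks and execute the scan; the check and file path below are illustrative examples, not required names.

# Illustrative only: add SodaCL checks as a string or from a file, then execute
checks = """
checks for TABLE_ONE:
  - row_count > 0
"""
scan.add_sodacl_yaml_str(checks)
# scan.add_sodacl_yaml_file(file_path="sodacl_spark_df/checks.yml")
scan.execute()
print(scan.get_logs_text())
scan.assert_no_checks_fail()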


If you are using reference checks with a Spark or Databricks data source to validate the existence of values in two datasets within the same schema, you must first convert your DataFrames into temp views to add them to the Spark session, as in the following example.

# after adding your Spark session to the scan
df.createOrReplaceTempView("df")
df2.createOrReplaceTempView("df2")


Connect Soda Library for SparkDF to Soda Cloud

Unlike other data sources, Soda Library does not require a configuration YAML file to run scans against Spark DataFrames. It is for use with programmatic Soda scans, only.

Therefore, to connect to Soda Cloud, include the Soda Cloud API keys in your programmatic scan using either scan.add_configuration_yaml_file(file_path) or scan.add_configuration_yaml_str(config_string), as in the example below.

from pyspark.sql import SparkSession, types
from soda.scan import Scan

spark_session = SparkSession.builder.master("local").appName("test").getOrCreate()
df = spark_session.createDataFrame(
    data=[{"id": "1", "name": "John Frome"}],
    schema=types.StructType(
        [types.StructField("id", types.StringType()), types.StructField("name", types.StringType())]
    ),
)
df.createOrReplaceTempView("users")

scan = Scan()
scan.set_verbose(True)
scan.set_scan_definition_name("YOUR_SCHEDULE_NAME")
scan.set_data_source_name("customers")
scan.add_configuration_yaml_str(
    """
soda_cloud:
  # use cloud.soda.io for EU region; use cloud.us.soda.io for USA region
  host: cloud.soda.io
  api_key_id: "[key]"
  api_key_secret: "[secret]"
"""
)
scan.add_spark_session(spark_session, data_source_name="customers")
scan.add_sodacl_yaml_file(file_path="sodacl_spark_df/checks.yml")
# ... all other scan methods in the standard programmatic scan ...
scan.execute()

# print(scan.get_all_checks_text())
print(scan.get_logs_text())
# scan.assert_no_checks_fail()

Use Soda Library with Spark DataFrames on Databricks

Use the soda-spark-df package to connect to Databricks using a Notebook.
🎥 Watch a video that demonstrates how to add Soda to your Databricks pipeline: https://go.soda.io/soda-databricks-video

  1. Follow steps 1-2 in the instructions above to install soda-spark-df.
  2. Reference the following Notebook example to connect to Databricks.
# import Scan from Soda Library
from soda.scan import Scan
# Create a Spark DataFrame, or use the Spark API to read data and create a DataFrame
df = spark.createDataFrame([(1, "a"), (2, "b")], ("id", "name"))
# Create a view that SodaCL uses as a dataset
df.createOrReplaceTempView("my_df")
# Create a Scan object, set a scan definition, and attach a Spark session
scan = Scan()
scan.set_scan_definition_name("test")
scan.set_data_source_name("customers")
scan.add_spark_session(spark, data_source_name="customers")
# Define checks for datasets 
checks = """
checks for my_df:
  - row_count > 0 
"""
# If you defined checks in a file accessible via Spark, you can use the scan.add_sodacl_yaml_file method to retrieve the checks
scan.add_sodacl_yaml_str(checks)
# Optionally, add a configuration file with Soda Cloud credentials 
# config = """
# soda_cloud:
#   host: cloud.soda.io
#   api_key_id: xyz
#   api_key_secret: xyz
# """
# scan.add_configuration_yaml_str(config)

# Execute a scan
scan.execute()
# Check the Scan object for methods to inspect the scan result; the following prints all logs to console
print(scan.get_logs_text())


If you are using reference checks with a Spark or Databricks data source to validate the existence of values in two datasets within the same schema, you must first convert your DataFrames into temp views to add them to the Spark session, as in the following example.

# after adding your Spark session to the scan
df.createOrReplaceTempView("df")
df2.createOrReplaceTempView("df2")


Connect to Spark for Hive

In addition to soda-spark-df, install and configure the soda-spark[hive] package if you use Apache Hive.

data_source my_datasource_name:
  type: spark
  method: hive
  username: 
  password: 
  host: 
  port: 
  catalog: 
  auth_method: 
Property      Required
type          required
method        required
username      required
password      required
host          required
port          required
catalog       required
auth_method   required
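
For illustration only, a filled-in configuration might look like the following; every value is a placeholder for your own HiveServer2 connection details, and the auth_method value is an assumption based on common HiveServer2 authentication modes.

data_source my_hive_datasource:
  type: spark
  method: hive
  username: my_hive_username     # placeholder
  password: my_hive_password     # placeholder
  host: hive.mycompany.example   # placeholder
  port: "10000"                  # placeholder; 10000 is the default HiveServer2 Thrift port
  catalog: default               # placeholder
  auth_method: NOSASL            # assumption; use the mode your HiveServer2 requires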


Connect to Spark for ODBC

In addition to soda-spark-df, install and configure the soda-spark[odbc] package if you use ODBC.

data_source my_datasource_name:
  type: spark
  method: odbc
  driver: 
  host: 
  port: 
  token: 
  organization: 
  cluster:
  server_side_parameters: 
Property                 Required
type                     required
method                   required
driver                   required
host                     required
port                     required
token                    required
organization             required
cluster                  required
server_side_parameters   required
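
For illustration only, a filled-in configuration might look like the following; every value is a placeholder for your own ODBC connection details, the driver name assumes the Simba Spark ODBC driver, and the server_side_parameters mapping format is an assumption.

data_source my_odbc_datasource:
  type: spark
  method: odbc
  driver: "Simba Spark ODBC Driver"   # assumption; use the ODBC driver installed on your system
  host: odbc.mycompany.example        # placeholder
  port: "443"                         # placeholder
  token: my_access_token              # placeholder
  organization: my_organization_id    # placeholder
  cluster: my_cluster_id              # placeholder
  server_side_parameters:             # assumption: a mapping of server-side parameters
    "spark.sql.session.timeZone": "UTC"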


Connect to Spark for Databricks SQL

  1. Install soda-spark-df (see above) and soda-spark[databricks] to connect to Databricks SQL. Refer to Install Soda Library for details.
    pip install -i https://pypi.cloud.soda.io soda-spark[databricks]
    
  2. If you have not done so already, install databricks-sql-connector. Refer to Databricks documentation for details.
  3. Configure the data source connection in your configuration.yml file as per the following example.
data_source my_datasource_name:
  type: spark
  method: databricks
  catalog: samples
  schema: nyctaxi
  host: hostname_from_Databricks_SQL_settings
  http_path: http_path_from_Databricks_SQL_settings
  token: my_access_token
Property    Required
type        required
method      required
catalog     required
schema      required
host        required
http_path   required
token       required
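
After configuring the connection, you can run checks against this data source from the Soda Library CLI; the checks.yml file name below is only an example.

soda scan -d my_datasource_name -c configuration.yml checks.yml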


Test the data source connection

To confirm that you have correctly configured the connection details for the data source(s) in your configuration YAML file, use the test-connection command. If you wish, add a -V option to the command to return results in verbose mode in the CLI.

soda test-connection -d my_datasource -c configuration.yml -V

Supported data types

Category  Data type
text      CHAR, VARCHAR, TEXT
number    NUMERIC, BIT, SMALLMONEY, INT, MONEY, FLOAT, REAL
time      DATE, TIME, DATETIME, DATETIMEOFFSET

Not supported:

  • BIGINT
  • BOOLEAN
  • DECIMAL
  • TIMESTAMP



