Load an RDF graph from a file or directory with a Spark DataFrame as underlying data structure.

Load an RDF graph from a file or directory with a Spark DataFrame as underlying data structure.
The path can also contain multiple paths and even wildcards, e.g.
"/my/dir1,/my/paths/part-00[0-5]*,/another/dir,/a/specific/file"

session

the Spark session

path

the absolute path of the file or directory

minPartitions

min number of partitions for Hadoop RDDs (SparkContext.defaultMinPartitions)
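A minimal sketch of what such a DataFrame loader can look like, built only from plain Spark APIs (the source does not name the library's actual entry point, so `LoadRdfAsDataFrame` and the naive N-Triples parsing below are illustrative assumptions, not the library's implementation):

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

object LoadRdfAsDataFrame {
  def main(args: Array[String]): Unit = {
    val session = SparkSession.builder()
      .appName("rdf-dataframe-loader")
      .master("local[*]")
      .getOrCreate()
    import session.implicits._

    // The path may be a single file, a directory, or a comma-separated
    // list with wildcards, e.g. "/my/dir1,/my/paths/part-00[0-5]*".
    val path = "/data/graph.nt" // placeholder path
    val minPartitions = session.sparkContext.defaultMinPartitions

    // textFile accepts comma-separated paths and glob patterns,
    // and takes the minimum number of partitions as second argument.
    val lines = session.sparkContext.textFile(path, minPartitions)

    // Naive N-Triples split into subject/predicate/object columns;
    // it does NOT handle literals containing whitespace.
    val graph: DataFrame = lines
      .filter(l => l.trim.nonEmpty && !l.startsWith("#"))
      .map { l =>
        val Array(s, p, o) = l.trim.stripSuffix(".").trim.split("\\s+", 3)
        (s, p, o)
      }
      .toDF("s", "p", "o")

    graph.show(5, truncate = false)
    session.stop()
  }
}
```

A DataFrame representation gives the triples a named columnar schema, so subsequent filtering and joining can go through Spark SQL and the Catalyst optimizer.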

Load an RDF graph from a file or directory with a Spark Dataset as underlying data structure.

Load an RDF graph from a file or directory with a Spark Dataset as underlying data structure.
The path can also contain multiple paths and even wildcards, e.g.
"/my/dir1,/my/paths/part-00[0-5]*,/another/dir,/a/specific/file"

Load an RDF graph from multiple files or directories with a Spark Dataset as underlying data structure.

Load an RDF graph from multiple files or directories with a Spark Dataset as underlying data structure.
The path can also contain multiple paths and even wildcards, e.g.
"/my/dir1,/my/paths/part-00[0-5]*,/another/dir,/a/specific/file"

Load an RDF graph from a file or directory with a Spark RDD as underlying data structure.

Load an RDF graph from a file or directory with a Spark RDD as underlying data structure.
The path can also contain multiple paths and even wildcards, e.g.
"/my/dir1,/my/paths/part-00[0-5]*,/another/dir,/a/specific/file"

session

the Spark session

path

the paths of the files

minPartitions

min number of partitions for Hadoop RDDs (SparkContext.defaultMinPartitions)
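The RDD variant can be sketched directly on `SparkContext`, which shows where the `minPartitions` parameter comes in: it is forwarded to `textFile`, whose second argument defaults to `SparkContext.defaultMinPartitions` (the object and parsing below are illustrative, not the library's code):

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession

object LoadRdfAsRdd {
  def main(args: Array[String]): Unit = {
    val session = SparkSession.builder()
      .appName("rdf-rdd-loader")
      .master("local[*]")
      .getOrCreate()
    val sc = session.sparkContext

    // Placeholder path; comma-separated paths and wildcards also work.
    val path = "/data/graph.nt"

    // minPartitions sets the lower bound on the number of Hadoop input
    // splits; here it mirrors the documented default.
    val minPartitions = sc.defaultMinPartitions

    val triples: RDD[(String, String, String)] = sc
      .textFile(path, minPartitions)
      .filter(l => l.trim.nonEmpty && !l.startsWith("#"))
      .map { l =>
        // Naive split; literals with whitespace need a real RDF parser.
        val Array(s, p, o) = l.trim.stripSuffix(".").trim.split("\\s+", 3)
        (s, p, o)
      }

    println(s"Loaded ${triples.count()} triples in ${triples.getNumPartitions} partitions")
    session.stop()
  }
}
```

Raising `minPartitions` above the default can improve parallelism when loading a few large files, at the cost of more, smaller tasks.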