pycldf.dataset
The core object of the API, bundling most access to CLDF data, is pycldf.Dataset. In the following we describe its attributes and methods, grouped thematically.
Dataset initialization
- class pycldf.dataset.Dataset(tablegroup)[source]
API to access a CLDF dataset.
- Parameters:
tablegroup (csvw.metadata.TableGroup) –
- __init__(tablegroup)[source]
A Dataset is initialized by passing a TableGroup. For convenient ways to get such a TableGroup instance, see the factory methods below.
- Parameters:
tablegroup (csvw.metadata.TableGroup) –
- classmethod from_data(fname)[source]
Initialize a Dataset from a single CLDF data file.
See https://github.com/cldf/cldf#metadata-free-conformance
- Parameters:
fname (typing.Union[str, pathlib.Path]) –
- Return type:
Dataset
- classmethod from_metadata(fname)[source]
Initialize a Dataset with the metadata found at fname.
- Parameters:
fname (typing.Union[str, pathlib.Path]) – A URL (str) or a local path (str or pathlib.Path). If fname points to a directory, the default metadata for the respective module will be read.
- Return type:
Dataset
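For illustration, a minimal sketch of both factory methods; the file names below are hypothetical:

from pycldf import Dataset

# Initialize from a metadata file (or a URL, or a directory containing default metadata):
dataset = Dataset.from_metadata('cldf/StructureDataset-metadata.json')

# Initialize from a single data file of a metadata-free dataset:
dataset = Dataset.from_data('values.csv')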
Accessing dataset metadata
- class pycldf.Dataset(tablegroup)[source]
API to access a CLDF dataset.
- Parameters:
tablegroup (csvw.metadata.TableGroup) –
- property bibname: str
- Returns:
Filename of the sources BibTeX file.
- property bibpath: Union[str, Path]
- Returns:
Location of the sources BibTeX file. Either a URL (str) or a local path (pathlib.Path).
- property directory: Union[str, Path]
- Returns:
The location of the metadata file. Either a local directory as pathlib.Path or a URL as str.
- property module: str
- Returns:
The name of the CLDF module of the dataset.
- property properties: dict
- Returns:
Common properties of the CSVW TableGroup of the dataset.
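A short sketch of reading these properties, assuming dataset was initialized as above; which keys appear in properties depends on the metadata at hand:

print(dataset.module)     # e.g. 'StructureDataset'
print(dataset.directory)  # local pathlib.Path or URL string of the metadata location
print(dataset.bibname, dataset.bibpath)
print(dataset.properties.get('dc:title'))  # None if the metadata does not declare a title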
Accessing schema objects: components, tables, columns, etc.
Similar to capability checks in programming languages that use
duck typing, it is often necessary
to access a dataset's schema, i.e. its tables and columns, to figure out whether
the dataset fits a certain purpose. This is supported via a dict-like interface provided
by pycldf.Dataset, where the keys are table specifiers or pairs (table specifier, column specifier).
A table specifier can be a table’s component name or its url, a column specifier can be a column
name or its propertyUrl.
check existence with in:
if 'ValueTable' in dataset:
    pass
if ('ValueTable', 'Language_ID') in dataset:
    pass
retrieve a schema object with item access:
table = dataset['ValueTable']
column = dataset['ValueTable', 'Language_ID']
retrieve a schema object or a default with .get:
table_or_none = dataset.get('ValueTableX')
column_or_none = dataset.get(('ValueTable', 'Language_ID'))
remove a schema object with del:
del dataset['ValueTable', 'Language_ID']
del dataset['ValueTable']
Note: Adding schema objects is not supported via key assignment, but with a set of specialized methods described in Editing metadata and schema.
- class pycldf.Dataset(tablegroup)[source]
API to access a CLDF dataset.
- Parameters:
tablegroup (csvw.metadata.TableGroup) –
- __contains__(item)[source]
Check whether a dataset specifies a table or column.
- Parameters:
item – See __getitem__()
- Return type:
bool
- __getitem__(item)[source]
Access to tables and columns.
If a pair (table-spec, column-spec) is passed as item, a Column will be returned, otherwise item is assumed to be a table-spec.
A table-spec may be
- a CLDF ontology URI matching the dc:conformsTo property of a table
- the local name of a CLDF ontology URI, where the complete URI matches the dc:conformsTo property of a table
- a filename matching the url property of a table
A column-spec may be
- a CLDF ontology URI matching the propertyUrl of a column
- the local name of a CLDF ontology URI, where the complete URI matches the propertyUrl of a column
- the name of a column
- Return type:
typing.Union[csvw.metadata.Table, csvw.metadata.Column]
- Raises:
SchemaError – If no matching table or column is found.
- property column_names: SimpleNamespace
Indirection layer, mapping ontology terms to local column names (or None).
Note that this property is computed each time it is accessed (because the dataset schema may have changed). So when accessing a dataset for reading only, calling code should use readonly_column_names.
- Returns:
A types.SimpleNamespace object, with an attribute <object>s for each component <Object>Table defined in the ontology. Each such attribute evaluates to None if the dataset does not contain the component. Otherwise, it is a types.SimpleNamespace object mapping each property defined in the ontology to None - if no such column is specified in the component - or to the local column name if it is.
- property components: Dict[str, Table]
- Returns:
Mapping of component names to table objects as defined in the dataset.
- get(item, default=None)[source]
Acts like dict.get.
- Parameters:
item – See __getitem__()
- Return type:
typing.Union[csvw.metadata.Table, csvw.metadata.Column, None]
- get_foreign_key_reference(table, column)[source]
Retrieve the reference of a foreign key constraint for the specified column.
- Parameters:
table (typing.Union[str, csvw.metadata.Table]) – Source table, specified by filename, component name or as Table instance.
column (typing.Union[str, csvw.metadata.Column]) – Source column, specified by column name, CLDF term or as Column instance.
- Return type:
typing.Optional[typing.Tuple[csvw.metadata.Table, csvw.metadata.Column]]
- Returns:
A pair (Table, Column) specifying the reference column - or None.
- readonly_column_names[source]
- Returns:
types.SimpleNamespace with component names as attributes.
- property tables: list
- Returns:
All tables defined in the dataset.
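As an illustration of such a capability check, the following sketch assumes a dataset that may or may not contain a ValueTable with a language reference; attribute names on column_names/readonly_column_names follow the <object>s pattern described above:

# Map CLDF terms to local column names (read-only access):
cols = dataset.readonly_column_names
if cols.values and cols.values.languageReference:
    print('values reference languages via column', cols.values.languageReference)

# Inspect where the foreign key of that column points:
ref = dataset.get_foreign_key_reference('ValueTable', 'languageReference')
if ref:
    target_table, target_column = ref
    print(target_table.url, target_column.name)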
Editing metadata and schema
In many cases, editing the metadata of a dataset is as simple as editing
properties()
, but for the somewhat complex
formatting of provenance data, we provide the shortcut
add_provenance()
.
Likewise, csvw.Table and csvw.Column objects in the dataset’s schema can be edited “in place”, by setting their attributes or adding to/editing their common_props dictionary. Thus, the methods listed below are concerned with adding and removing tables and columns.
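A sketch of such in-place edits; the title, column description and provenance values are made up, and the exact shape of the provenance entry is an assumption rather than a requirement of the API:

# Edit common metadata directly:
dataset.properties['dc:title'] = 'My dataset'

# Edit a column description in place (assuming the dataset has a ValueTable):
dataset['ValueTable', 'value'].common_props['dc:description'] = 'The attested value'

# Record provenance:
dataset.add_provenance(wasDerivedFrom={
    'rdf:about': 'https://example.org/source-dataset',
    'dc:title': 'Hypothetical source dataset',
})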
- class pycldf.Dataset(tablegroup)[source]
API to access a CLDF dataset.
- Parameters:
tablegroup (csvw.metadata.TableGroup) –
- add_columns(table, *cols)[source]
Add columns specified by cols to the table specified by table.
- Parameters:
table (typing.Union[str, csvw.metadata.Table]) –
- Return type:
None
- add_component(component, *cols, **kw)[source]
Add a CLDF component to a dataset.
- Parameters:
component (typing.Union[str, dict]) – A component specified by name or as dict representing the JSON description of the component.
- Return type:
csvw.metadata.Table
- add_foreign_key(foreign_t, foreign_c, primary_t, primary_c=None)[source]
Add a foreign key constraint.
Note
Composite keys are not supported yet.
- Parameters:
foreign_t (typing.Union[str, csvw.metadata.Table]) – Table reference for the linking table.
foreign_c (typing.Union[str, csvw.metadata.Column]) – Column reference for the link.
primary_t (typing.Union[str, csvw.metadata.Table]) – Table reference for the linked table.
primary_c (typing.Union[str, csvw.metadata.Column, None]) – Column reference for the linked column - or None, in which case the primary key of the linked table is assumed.
- add_provenance(**kw)[source]
Add metadata about the dataset’s provenance.
- Parameters:
kw – Key-value pairs, where keys are local names of properties in the PROV ontology for describing entities (see https://www.w3.org/TR/2013/REC-prov-o-20130430/#Entity).
- add_table(url, *cols, **kw)[source]
Add a table description to the Dataset.
- Parameters:
url (str) – The url property of the table.
cols – Column specifications; anything accepted by pycldf.dataset.make_column().
kw – Recognized keywords: primaryKey – specify the column(s) constituting the primary key of the table.
- Return type:
csvw.metadata.Table
- Returns:
The new table.
- remove_columns(table, *cols)[source]
Remove cols from table’s schema.
Note
Foreign keys pointing to any of the removed columns are removed as well.
- Parameters:
table (typing.Union[str, csvw.metadata.Table]) –
- remove_table(table)[source]
Removes the table specified by table from the dataset.
- Parameters:
table (typing.Union[str, csvw.metadata.Table]) –
- rename_column(table, col, name)[source]
Assign a new name to an existing column, cascading this change to foreign keys.
This functionality can be used to change the names of columns added automatically by Dataset.add_component().
- Parameters:
table (typing.Union[str, csvw.metadata.Table]) –
col (typing.Union[str, csvw.metadata.Column]) –
name (str) –
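Taken together, a (hypothetical) editing session might look like the following sketch; all table, file and column names are illustrative:

# Add a standard component and an extra, non-CLDF column:
dataset.add_component('LanguageTable')
dataset.add_columns('ValueTable', 'Region')

# Add a custom table and link it to the LanguageTable:
dataset.add_table('speakers.csv', 'ID', 'Language_ID', 'Count', primaryKey='ID')
dataset.add_foreign_key('speakers.csv', 'Language_ID', 'LanguageTable', 'ID')

# Rename a column that was added automatically by add_component:
dataset.rename_column('LanguageTable', 'Name', 'Label')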
Adding data
The main method to persist data as a CLDF dataset is write(), which accepts data for all CLDF data files as input. This does not include sources, though; these must be added using add_sources().
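For example, writing a small StructureDataset could look like the following sketch; it assumes the default ValueTable columns of a StructureDataset, and all paths, IDs and field values are made up:

from pycldf import Dataset, Source

dataset = Dataset.from_metadata('cldf/StructureDataset-metadata.json')
dataset.add_sources(Source('book', 'meier2005', author='Meier', year='2005', title='The Book'))
dataset.write(ValueTable=[
    {
        'ID': '1',
        'Language_ID': 'lang1',
        'Parameter_ID': 'param1',
        'Value': 'yes',
        'Source': ['meier2005'],
    },
])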
Reading data
Reading rows from CLDF data files, honoring the datatypes specified in the schema, is already implemented by csvw. Thus, the simplest way to read data is iterating over the csvw.Table objects. However, this will ignore the semantic layer provided by CLDF. E.g. a CLDF languageReference linking a value to a language will appear in the dict returned for a row under the local column name. Thus, we provide several more convenient methods to read data.
- class pycldf.Dataset(tablegroup)[source]
API to access a CLDF dataset.
- Parameters:
tablegroup (csvw.metadata.TableGroup) –
- get_object(table, id_, cls=None, pk=False)[source]
Get a row of a component as a pycldf.orm.Object instance.
- Return type:
pycldf.orm.Object
- get_row(table, id_)[source]
Retrieve a row specified by table and CLDF id.
- Raises:
ValueError – If no matching row is found.
- Parameters:
table (typing.Union[str, csvw.metadata.Table]) –
- Return type:
dict
- get_row_url(table, row)[source]
Get a URL associated with a row. Tables can specify associated row URLs by
listing one column with datatype anyURI or specifying a valueUrl property for their ID column.
For rows representing objects in web applications, this may be the object's URL. For rows representing media files, it may be a URL locating the file on a media server.
- Parameters:
table (typing.Union[str, csvw.metadata.Table]) – Table specified in a way that __getitem__ understands.
row – A row specified by ID or as dict as returned when iterating over a table.
- Return type:
typing.Optional[str]
- Returns:
a str representing a URL or None.
- iter_rows(table, *cols)[source]
Iterate rows in a table, resolving CLDF property names to local column names.
- Parameters:
table (typing.Union[str, csvw.metadata.Table]) – Table name.
cols – List of CLDF property terms which must be resolved in resulting dicts, i.e. the row dicts will be augmented with copies of the values keyed with CLDF property terms.
- Return type:
typing.Iterator[dict]
- objects(table, cls=None)[source]
Read data of a CLDF component as pycldf.orm.Object instances.
- Parameters:
table (str) – table to read, specified as component name.
cls (typing.Optional[typing.Type]) – pycldf.orm.Object subclass to instantiate objects with.
- Return type:
pycldf.util.DictTuple
- Returns:
pycldf.util.DictTuple of pycldf.orm.Object instances.
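A sketch of these access methods, assuming a dataset with a ValueTable and a LanguageTable; the row ID is made up:

# Iterate rows with CLDF terms resolved in the row dicts:
for row in dataset.iter_rows('ValueTable', 'languageReference', 'value'):
    print(row['languageReference'], row['value'])

# Read a component as ORM objects:
for language in dataset.objects('LanguageTable'):
    print(language.id, language.cldf.name)

# Retrieve a single object or plain row by CLDF id:
value = dataset.get_object('ValueTable', '1')
row = dataset.get_row('ValueTable', '1')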
Writing (meta)data
- class pycldf.Dataset(tablegroup)[source]
API to access a CLDF dataset.
- Parameters:
tablegroup (csvw.metadata.TableGroup) –
- write(fname=None, zipped=None, **table_items)[source]
Write metadata, sources and data. Metadata will be written to fname (as interpreted in pycldf.dataset.Dataset.write_metadata()); data files will be written to the file specified by csvw.Table.url of the corresponding table, interpreted as a path relative to directory().
- Parameters:
zipped (typing.Optional[typing.Iterable]) – Iterable listing keys of table_items for which the table file should be zipped.
table_items (typing.List[dict]) – Mapping of table specifications to lists of row dicts.
fname (typing.Optional[pathlib.Path]) –
- Return type:
pathlib.Path
- Returns:
Path of the CLDF metadata file as written to disk.
- write_metadata(fname=None)[source]
Write the CLDF metadata to a JSON file.
- Parameters:
fname (typing.Union[str, pathlib.Path, None]) – Path of a file to write to, or None to use the default name and write to directory().
- Return type:
pathlib.Path
- write_sources(zipped=False)[source]
Write the sources BibTeX file to bibpath().
- Return type:
typing.Optional[pathlib.Path]
- Returns:
None, if no BibTeX file was written (because no source items were added), pathlib.Path of the written BibTeX file otherwise. Note that this path does not need to exist, because the content may have been added to a zip archive.
- Parameters:
zipped (bool) –
Reporting
- class pycldf.Dataset(tablegroup)[source]
API to access a CLDF dataset.
- Parameters:
tablegroup (csvw.metadata.TableGroup) –
- stats(exact=False)[source]
Compute summary statistics for the dataset.
- Return type:
typing.List[typing.Tuple[str, str, int]]
- Returns:
List of triples (table, type, rowcount).
- validate(log=None, validators=None, ontology_path=None)[source]
Validate schema and data of a Dataset:
Make sure the schema follows the CLDF specification and
make sure the data is consistent with the schema.
- Parameters:
log (typing.Optional[logging.Logger]) – a logging.Logger to write ERRORs and WARNINGs to. If None, an exception will be raised at the first problem.
validators (typing.Optional[typing.List[typing.Tuple[str, str, callable]]]) – Custom validation rules, i.e. triples (tablespec, columnspec, attrs validator).
- Raises:
ValueError – if a validation error is encountered (and log is None).
- Return type:
bool
- Returns:
Flag signaling whether schema and data are valid.
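For example, a sketch of a small reporting routine:

import logging

log = logging.getLogger('cldf')
if dataset.validate(log=log):
    for table, type_, rowcount in dataset.stats():
        print(table, type_, rowcount)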
Dataset discovery
We provide two functions to make it easier to discover CLDF datasets in the file system. This is useful, e.g., when downloading archived datasets from Zenodo, where it may not be known in advance where in a zip archive the metadata file may reside.
- pycldf.sniff(p)[source]
Determine whether a file contains CLDF metadata.
- Parameters:
p (pathlib.Path) – pathlib.Path object for an existing file.
- Return type:
bool
- Returns:
True if the file contains CLDF metadata, False otherwise.
- pycldf.iter_datasets(d)[source]
Discover CLDF datasets - by identifying metadata files - in a directory.
- Parameters:
d (pathlib.Path) – directory
- Return type:
typing.Iterator[pycldf.dataset.Dataset]
- Returns:
generator of Dataset instances.
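A sketch of discovering datasets below a (hypothetical) download directory:

import pathlib
from pycldf import iter_datasets, sniff

download_dir = pathlib.Path('downloads/archive')
for dataset in iter_datasets(download_dir):
    print(dataset.module, dataset.directory)

# Or check a single file:
print(sniff(download_dir / 'cldf' / 'metadata.json'))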
Sources
When constructing sources for a CLDF dataset in Python code, you may pass
pycldf.Source
instances into pycldf.Dataset.add_sources()
,
or use pycldf.Reference.__str__()
to format a row’s source value
properly.
Direct access to pycldf.dataset.Sources
is rarely necessary (hence
it is not available as import from pycldf directly), because each
pycldf.Dataset
provides access to an appropriately initialized instance
in its sources attribute.
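A sketch of both usages; the BibTeX fields, IDs and page numbers are made up:

from pycldf import Source, Reference

src = Source('article', 'doe2020', author='Doe, Jane', year='2020', title='A hypothetical article')
dataset.add_sources(src)

# Format a row's source value, including page context:
source_value = str(Reference(src, '12-15'))  # e.g. 'doe2020[12-15]'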
- class pycldf.Source(genre, id_, *args, _check_id=True, _lowercase=False, _strip_tex=None, **kw)[source]
A bibliographical record, specifying a source for some data in a CLDF dataset.
- Parameters:
genre (str) –
id_ (str) –
_check_id (bool) –
_lowercase (bool) –
_strip_tex (typing.Optional[typing.Iterable[str]]) –
- class pycldf.Reference(source, desc)[source]
A reference connects a piece of data with a Source, typically adding some citation context, often page numbers or similar.
- Parameters:
source (pycldf.sources.Source) –
desc (typing.Optional[str]) –
- class pycldf.dataset.Sources[source]
A dict-like container for all sources linked to data in a CLDF dataset.
- add(*entries, **kw)[source]
Add a source, either specified as BibTeX string or as Source.
- Parameters:
entries (typing.Union[str, pycldf.sources.Source]) –
- expand_refs(refs, **kw)[source]
Turn a list of string references into proper Reference instances, looking up sources in self.
This can be used from a pycldf.Dataset as follows:
>>> for row in dataset.iter_rows('ValueTable', 'source'):
...     for ref in dataset.sources.expand_refs(row['source']):
...         print(ref.source)
- Parameters:
refs (typing.Iterable[str]) –
- Return type:
typing.Iterable[pycldf.sources.Reference]
Subclasses supporting specific CLDF modules
- class pycldf.Generic(tablegroup)[source]
Generic datasets have no primary table.
- Parameters:
tablegroup (csvw.metadata.TableGroup) –