Reading/writing to storage
Terality uses the same methods as pandas to load data, such as `read_parquet` and similar. Example:

```python
import terality as te

# Load all parquet files at this S3 location
df = te.read_parquet("s3://my-datasets/path/to/objects/")

# Load a CSV file from disk
df = te.read_csv("/path/to/my/data.csv")
```
You can import data just as you would with pandas, for example by calling `read_parquet` on a local file or on cloud storage (such as AWS S3). You can find the currently supported functions in the Data formats section.
Do not hesitate to contact us if you want us to implement any other read function.
If you're done working on your DataFrame for the moment, or it's still too big to be held in memory on your computer, you may want to download and save it back on your computer's drive/cloud storage. To do this, you can simply use the same API as pandas:
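Because Terality mirrors the pandas I/O API, the round trip can be sketched with plain pandas; the same `to_csv`/`read_csv` calls apply to a Terality DataFrame. The DataFrame contents and the temporary local path below are purely illustrative.

```python
import os
import tempfile

import pandas as pd

# Hypothetical small frame; any DataFrame works the same way.
df = pd.DataFrame({"city": ["Paris", "Tokyo"], "pop": [2.1, 13.9]})

# Write to a local CSV file, then read it back.
# These are the same method names Terality exposes.
path = os.path.join(tempfile.mkdtemp(), "data.csv")
df.to_csv(path, index=False)
back = pd.read_csv(path)
```

With Terality, the destination could just as well be a cloud URI such as `s3://...`, exactly as with pandas.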
# For instance, for AWS S3 and parquet: