google-pandas-load documentation


Wrapper for transferring data from A to B, where A and B are distinct and chosen from BigQuery, Storage, a local directory and pandas.

Acknowledgements

I am grateful to my employer Easyence for providing me the resources to develop this library and for allowing me to publish it.

Installation

$ pip install google-pandas-load

Quickstart

Set up a loader.

In the following code, the credentials are inferred from the environment. For further information about how to authenticate to Google Cloud Platform with the Google Cloud Client Library for Python, have a look here.

from google_pandas_load import LoaderQuickSetup

gpl = LoaderQuickSetup(
    project_id='pi',
    dataset_name='dn',
    bucket_name='bn',
    bucket_dir_path='tmp',
    local_dir_path='/tmp',
    credentials=None)
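
If the credentials should not be inferred from the environment, they can be built explicitly and passed in. Below is a minimal sketch assuming a service-account key file (the path is hypothetical); it uses the standard google.oauth2.service_account API.

from google.oauth2 import service_account

from google_pandas_load import LoaderQuickSetup

# Hypothetical path to a service-account key file.
credentials = service_account.Credentials.from_service_account_file(
    '/path/to/key.json')

gpl = LoaderQuickSetup(
    project_id='pi',
    dataset_name='dn',
    bucket_name='bn',
    bucket_dir_path='tmp',
    local_dir_path='/tmp',
    credentials=credentials)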

Transfer data seamlessly from and to various locations:

Warning

The loader deletes any pre-existing data with the same name in every location it passes through or writes to.

An explanation of this choice and an example can be found here.
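
As a precaution, pre-existing data with the same name can be checked for before loading. The sketch below uses the google-cloud-bigquery and google-cloud-storage clients directly; the names ('pi', 'dn', 'bn', 'tmp', 'a0') follow the quickstart above, and the assumption that matching blobs share the 'tmp/a0' prefix is only illustrative.

from google.api_core.exceptions import NotFound
from google.cloud import bigquery, storage

bq_client = bigquery.Client(project='pi')
gcs_client = storage.Client(project='pi')

# Does a table named 'a0' already exist in the dataset?
try:
    bq_client.get_table('pi.dn.a0')
    print('Table a0 exists and would be overwritten.')
except NotFound:
    pass

# Are there blobs named like 'a0' in the bucket directory? (Illustrative prefix.)
blobs = list(gcs_client.list_blobs('bn', prefix='tmp/a0'))
if blobs:
    print(f'{len(blobs)} blob(s) would be overwritten.')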

Note

If the optional argument bucket_dir_path is not given, data is stored at the root of the bucket. It is good practice to specify this argument so that data is kept in a dedicated bucket directory.

# Populate a dataframe with a query result.
df = gpl.load(
    source='query',
    destination='dataframe',
    query='select 3 as x')

# Apply a python transformation to the data.
df['x'] = 2*df['x']

# Upload the result to BigQuery.
gpl.load(
    source='dataframe',
    destination='dataset',
    dataframe=df,
    data_name='a0')

# Extract the data to Storage.
gpl.load(
    source='dataset',
    destination='bucket',
    data_name='a0')

# Download the data to the local directory.
gpl.load(
    source='bucket',
    destination='local',
    data_name='a0')
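
The data can also be brought back into pandas from any of these locations. For instance, the call below (an assumption based on the source/destination pairs used above) reads 'a0' from the dataset back into a dataframe.

# Read the data back from BigQuery into a dataframe.
df_back = gpl.load(
    source='dataset',
    destination='dataframe',
    data_name='a0')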

Launch several load jobs simultaneously, with massive parallelization of the query_to_dataset and dataset_to_bucket steps. This parallelization is provided by BigQuery.

from google_pandas_load import LoadConfig

# Build the load configs.
configs = []
for i in range(100):
    config = LoadConfig(
        source='query',
        destination='local',
        query=f'select {i} as x',
        data_name=f'b{i}')
    configs.append(config)

# Launch all the load jobs at the same time.
gpl.multi_load(configs=configs)
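
Once the jobs have finished, each result sits in the local directory under its data_name. As a sketch (assuming local-to-dataframe loads behave as in the single-load case above), the results can be read back one by one:

# Read each downloaded result back into a dataframe.
dfs = []
for i in range(100):
    dfs.append(gpl.load(
        source='local',
        destination='dataframe',
        data_name=f'b{i}'))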

Main features

  • Transfer data seamlessly from and to various locations.

  • Launch several load jobs simultaneously.

  • Massive parallelization of the cloud steps with BigQuery.

Limitation

  • Only simple types can be downloaded or uploaded.

Methods of the underlying client libraries can handle more types; a sketch is given below.
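
For instance, a repeated (array) column is not a simple type. One possible workaround, sketched below with the google-cloud-bigquery client directly (the table and column names are made up, and pyarrow must be installed), is to upload such a dataframe with load_table_from_dataframe, which supports richer type mappings.

import pandas as pd
from google.cloud import bigquery

client = bigquery.Client(project='pi')

# A dataframe with a repeated (array) column.
df = pd.DataFrame({'x': [1, 2], 'tags': [['a', 'b'], ['c']]})

# Upload it directly with the BigQuery client, which handles richer types.
job = client.load_table_from_dataframe(df, 'pi.dn.complex_example')
job.result()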

The basic mechanism

This library essentially chains data-transfer functions from the Google Cloud Client Library for Python and from pandas.

To download, functions that move data from BigQuery to Storage, from Storage to the local directory, and from a local file into a dataframe are chained.

To upload, functions performing the opposite moves are chained in reverse order. An illustrative sketch follows.
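
This sketch builds a plausible download chain directly on the client libraries (the table, blob and file names are made up, and the exact calls the library performs may differ); the upload chain would typically use pandas.DataFrame.to_csv, Blob.upload_from_filename and Client.load_table_from_uri.

import pandas as pd
from google.cloud import bigquery, storage

bq_client = bigquery.Client(project='pi')
gcs_client = storage.Client(project='pi')

# 1. Query to dataset: materialize the query result in a BigQuery table.
destination = bigquery.TableReference.from_string('pi.dn.c0')
job_config = bigquery.QueryJobConfig(destination=destination)
bq_client.query('select 3 as x', job_config=job_config).result()

# 2. Dataset to bucket: extract the table to a CSV blob in Storage.
bq_client.extract_table('pi.dn.c0', 'gs://bn/tmp/c0.csv').result()

# 3. Bucket to local: download the blob to the local directory.
gcs_client.bucket('bn').blob('tmp/c0.csv').download_to_filename('/tmp/c0.csv')

# 4. Local to dataframe: read the CSV with pandas.
df = pd.read_csv('/tmp/c0.csv')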

Required packages

  • google-cloud-bigquery

  • google-cloud-storage

  • pandas

Table of Contents