Segmenting Huge Blobs

Apart from the compute infrastructure abstractions, Deltacloud has support for blob storage via the 'buckets' collection. Right now you can create a 'blob' via an HTTP PUT to the 'bucket' in which you want the blob to be created:

PUT /api/buckets/mybucket/12Jul2011blob?format=xml HTTP/1.1
Authorization: Basic AU1J3UB2121Afd1DdyQWxLaTYTmJMNF4zTXBoRGdhMDh2RUw5ZDAN9zVXVa==
Content-Type: text/plain
Content-Length: 128988

... BLOB DATA ...
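The same request can be sketched in client code; the `blob_put_request` helper is purely illustrative and only builds the path and headers for a PUT like the one above:

```python
def blob_put_request(bucket, blob, data, content_type="text/plain"):
    """Build the path and headers for a Deltacloud blob-create PUT.

    Illustrative only - it constructs the request pieces, it does not
    send anything.
    """
    path = "/api/buckets/%s/%s?format=xml" % (bucket, blob)
    headers = {"Content-Type": content_type,
               "Content-Length": str(len(data))}
    return path, headers
```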

All the blob-storage cloud providers that Deltacloud supports offer a way to do 'multipart' blob uploads - i.e. to upload a number of segments or parts individually that will eventually make up a single blob on the provider. We recently had a request to add this functionality for the Openstack driver. But since Deltacloud is all about cloud abstraction, we need to find a 'common' way to implement this across all supported providers. This post is primarily an attempt to organise my notes and explore the different approaches that might be taken. I’m leaning towards the term ‘segments’ for the individual ‘bits’ or ‘parts’ of the huge blob - hence s/parts/segments/ in the text below:

The native way

The ways in which the various supported providers handle segmented blob uploads:

  1. Openstack Swift
  2. Amazon S3
  3. Microsoft Azure
  4. Google Cloud Storage

Openstack Swift

Possibly the simplest approach - relevant API docs:

  1. Upload ‘segments’. The rules are that their names must share a common ‘prefix’; the segments are reassembled in the sorted order of their names.

     PUT /v1.1/12345/my_container/large1
     PUT /v1.1/12345/my_container/large2 ...
  2. Finalise - Once you’ve uploaded all the segments, upload a ‘manifest’ which tells the Openstack Swift server that you’d like to create a single blob from all the segments named with a particular prefix in a particular container (bucket). The manifest is specified with the ‘X-Object-Manifest’ HTTP header in the request:

     PUT /v1.1/12345/my_container/large_object
     X-Object-Manifest: my_container/large
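The two Swift steps above can be sketched in client code. The helper names, and the account/container/prefix values used in the test, are illustrative only:

```python
def swift_segment_path(account, container, prefix, index):
    """Path for one segment PUT; the shared name prefix (plus the sorted
    segment names) controls how Swift reassembles the blob."""
    return "/v1.1/%s/%s/%s%d" % (account, container, prefix, index)

def swift_manifest_headers(container, prefix):
    """Headers for the final PUT that stitches the segments together."""
    return {"X-Object-Manifest": "%s/%s" % (container, prefix)}
```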

Amazon S3

AWS S3 uses a 3-step process (relevant API docs: initiate, send segments, complete):

  1. Initiate the multipart upload - by using the ‘?uploads’ parameter in the POST URL. This will return a unique UploadId which needs to be supplied when sending any of the segments that will make up the large blob:

     POST /myobject?uploads
     <InitiateMultipartUploadResult xmlns="">
  2. Upload segments - specifying both the partNumber and the uploadId obtained from step 1 above. The response will include an ‘ETag’ for the part, which clients must retain for completing the upload in step 3:

     PUT /ObjectName?partNumber=PartNumber&uploadId=UploadId HTTP/1.1
     HTTP/1.1 200 OK
     ETag: "b54357faf0632cce46e942fa68356b38"
  3. Finalise the upload - by performing a POST on the object itself, providing the uploadId in the POST URL. Here, the POST body must include a structure detailing the ETag and part number of each uploaded segment:

     POST /ObjectName?uploadId=foo
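The completion body from step 3 can be assembled from the (part number, ETag) pairs collected in step 2. A minimal sketch with an illustrative helper name; the XML element names follow the S3 API docs:

```python
def s3_complete_body(parts):
    """Build the CompleteMultipartUpload request body.

    parts: list of (part_number, etag) tuples, in upload order -
    the etags are the ones returned by each part-upload response.
    """
    items = "".join(
        "<Part><PartNumber>%d</PartNumber><ETag>%s</ETag></Part>" % (n, etag)
        for n, etag in parts
    )
    return "<CompleteMultipartUpload>%s</CompleteMultipartUpload>" % items
```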

Microsoft Azure

Azure has 2 types of blob - page and block. Deltacloud support for Azure blob storage applies to block blobs, as do the notes here. The approach sits somewhere between Openstack's and AWS S3's - it is a 2-step process. Relevant API docs - upload segments and upload segment list, which is a manifest detailing all the parts and how they fit together:

  1. Upload segments - using the ?comp=block URL parameter and specifying the blockId which is a unique Base64 string value. If the named blob doesn’t yet exist it is created after you upload the first segment.

  2. Upload manifest - using the ?comp=blocklist URL parameter and specifying the segments in the request body; their ordering determines how the blob is assembled from the segments. Furthermore, you can specify whether a given blockId comes from the committed block list (blocks already committed by a previous put block list operation, which now form part of an existing blob) or the uncommitted block list (uploaded segments that haven’t yet been committed) - as explained here.

     PUT HTTP/1.1
     Request Body:
     <?xml version="1.0" encoding="utf-8"?>
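A sketch of the two Azure pieces - generating the Base64 block IDs and the block-list body. The helper names are illustrative; the zero-padded counter reflects Azure's requirement that all block IDs within a blob have the same length:

```python
import base64

def azure_block_id(index):
    """A Base64 block ID; padding the counter keeps every ID the same
    length, which Azure requires within a single blob."""
    return base64.b64encode(("block-%06d" % index).encode()).decode()

def azure_block_list_body(block_ids):
    """Body for PUT ...?comp=blocklist. <Latest> takes each block from
    the uncommitted list; <Committed> would take it from the committed
    list instead."""
    items = "".join("<Latest>%s</Latest>" % b for b in block_ids)
    return ('<?xml version="1.0" encoding="utf-8"?>'
            "<BlockList>%s</BlockList>" % items)
```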

Google Cloud Storage

Google uses a 2-step process: initiate the upload to obtain the upload_id, then send the segments using that ID - relevant API docs:

  1. Initiate - specifying the ‘x-goog-resumable’ HTTP header and getting the upload_id from the returned Location header:

     POST / HTTP/1.1
     x-goog-resumable: start
     HTTP 201 Created
  2. Upload segments - supplying the upload_id obtained from step 1 above as a URL parameter. The caveat here is that the size of each segment (except the last) must be a multiple of 256 kilobytes. Each uploaded segment must include a ‘Content-Range’ header specifying the relative position of the segment within the blob:

     PUT / HTTP/1.1
     Content-Length: 524288
     Content-Range: bytes 0-524287/*
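The 256-kilobyte rule can be sketched as a helper that computes the Content-Range header for each segment. This assumes the total blob size is known up front (the example above uses ‘/*’ for an as-yet-unknown total); the helper name and default chunk size are illustrative:

```python
CHUNK = 256 * 1024  # minimum granularity for non-final segments

def content_ranges(total_size, chunk_size=2 * CHUNK):
    """Content-Range header values for each segment of a blob of
    total_size bytes. Every chunk except the last must be a multiple
    of 256 KB."""
    assert chunk_size % CHUNK == 0, "non-final chunks must be 256 KB multiples"
    ranges = []
    offset = 0
    while offset < total_size:
        end = min(offset + chunk_size, total_size) - 1
        ranges.append("bytes %d-%d/%d" % (offset, end, total_size))
        offset = end + 1
    return ranges
```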

The Deltacloud way?

Approach 1 - lowest common denominator (aka ‘brute force’)

Here we’d use a 3-step approach - initiate, upload segments, upload manifest/finalize. The hope is that all the approaches above ‘fit’ into this model:

  1. Initiate - using a URL parameter or HTTP header to signify that this is the start of a ‘segmented’ blob:

     PUT /api/buckets/my_bucket/blob_id?segmented
     HTTP 200 OK
  2. Upload Segments - specifying the segment order via an integer/string and supplying the ID returned from step 1 above. Again, these can be supplied through the URI or via HTTP headers. This operation would return an ID for each segment in the response:

     PUT /api/buckets/my_bucket/blob_id?segmented&segmented_blob=id&segment_order=1
     HTTP 202 Accepted
  3. Finalise - complete the upload by specifying the segment order and IDs:

     PUT /api/buckets/my_bucket/blob_id?segmented&segmented_blob=id
     1=foo123, 2=bar345, 3=baz678 etc
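Seen from a client, the proposed flow might look like the following sketch. To be clear, all the parameter names here (segmented, segmented_blob, segment_order) are proposals from this post, not an implemented API:

```python
def initiate_url(bucket, blob):
    """Step 1 - signal the start of a 'segmented' blob."""
    return "/api/buckets/%s/%s?segmented" % (bucket, blob)

def segment_url(bucket, blob, blob_id, order):
    """Step 2 - upload one segment, with the ID from step 1 and an
    explicit ordering."""
    return ("/api/buckets/%s/%s?segmented&segmented_blob=%s&segment_order=%d"
            % (bucket, blob, blob_id, order))

def finalize_body(segment_ids):
    """Step 3 - list each segment's order and ID, as in the example
    above. segment_ids: {order: segment_id} from the step-2 responses."""
    return ", ".join("%d=%s" % (k, v) for k, v in sorted(segment_ids.items()))
```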

The biggest disadvantage of this approach is that it makes the process more complex for certain providers - Openstack, for example. The advantage, of course, is that you could use the same generic process across multiple blob storage providers. Perhaps approach 2 can help:

Approach 2 - hybrid

We could use the approach above (or some variant - I’m not yet happy with the names of the various parameters - segmented_blob, segment_order, segmentId etc.) and in addition allow for ‘shortcuts’. For example, say you need to work with Openstack and you’re already familiar with the Swift process for uploading segmented blobs. Right now, the only thing missing from the Deltacloud API is the ability to pass in the ‘manifest’ header. We could simply allow this header to pass through (or perhaps rename and document it as, say, ‘Deltacloud-Blob-Manifest’). Thus, if you use Deltacloud predominantly with Openstack, you’d use this shortcut, and when you need to work across providers you’d fall back to the generic 3-step process.
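From the client side the shortcut might amount to no more than this; the ‘Deltacloud-Blob-Manifest’ header name is, as above, only a suggestion:

```python
def passthrough_headers(container, prefix):
    """Headers for the Openstack shortcut: a (hypothetical)
    Deltacloud-Blob-Manifest header that Deltacloud would forward to
    Swift as X-Object-Manifest."""
    return {"Deltacloud-Blob-Manifest": "%s/%s" % (container, prefix)}
```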

The cost of this, of course, is added complexity - exceptions for particular cloud providers. I’m still undecided on the way forward. Comments and suggestions are very welcome.
