[Zope] Advice on Blob Storage?

Tom Russell tomr at fastmail.net
Thu Sep 24 08:52:42 CEST 2015

On Mon, 21 Sep 12:24:45 PM Michael McFadden wrote:
> This may be more of a zodb / relstorage question - I hope it's ok to ask
> on the Zope list.
> I'm seeing behavior using relstorage and blobs that I didn't expect:
>     If I upload a large file, say 2 gigs, I am noticing that our SQL
> database also grows by 2 gigs, along with the blob storage.
>    After a pack, the space is reclaimed on the SQL side, and everyone is
> happy.
>    FWIW - it's videos that are doing this.
> I am pretty sure it's the undo log that's growing, based on the fact
> that a pack reclaims the space.
> Can this behavior be turned off for a specific field or content type?
> So undo logs are preserved for everything BUT this monster of a content
> type?
> Seems strange to do this, though.
> Are there other alternatives, like calling .pack() directly on the
> field's storage after it's set?
> Our problem is that our sql database grows to a huge size between our
> weekly packs, and backups of the sql dumps are becoming unmanageable.
> Our blob backups are ready to deal with this kind of size, but not the
> sql backups.
> ----------
> Going deeper down the rabbit hole, although I don't think it's relevant,
> is the fact that I hacked and replaced the storage class for the field.
> Instead of using AnnotationStorage - which I found used as default for
> ImageField - I intercept the data during storage.set(), ship it out to a
> separate storage facility, and replace the data with a happy message
> "This is not where your data is" which is then written to the blobs.
> It works just great - keeping our blob storage growth from going
> crazy.    If you try to 'download' the file from Plone, you'll get the
> text file with the happy message.
> Now that I've been shown that the Blob Storage is functioning just fine,
> but the SQL storage size is going off the charts, I hope I'm not back at
> square one.
> The goal is to allow users to think they are uploading 4Gb videos into
> Plone, when under the covers, we're actually shipping the video files
> off to some fancy off-site storage. (Akamai)  So we don't have to store
> them and back them up on-site, and our blob directories remain
> manageable in size.
> The storage hack can be seen here:
> https://github.com/RadioFreeAsia/rfa.kaltura/blob/master/rfa/kaltura/storage/storage.py
> I'm not proud of it, but it works.
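For list readers who are curious, the interception pattern Michael describes can be sketched in a few lines. OffsiteStorage and the shipper callable below are hypothetical stand-ins of my own, not the real Archetypes AnnotationStorage API:

```python
# Minimal sketch of the interception pattern, assuming a "shipper"
# callable that uploads data off-site and returns a remote ID.
# This mimics (but is NOT) the Archetypes storage interface.

PLACEHOLDER = b"This is not where your data is"

class OffsiteStorage:
    """Ship the real payload off-site in set(); store a placeholder."""

    def __init__(self, shipper):
        self._shipper = shipper  # callable: (name, data) -> remote id
        self._local = {}         # stands in for the local blob storage

    def set(self, name, value):
        remote_id = self._shipper(name, value)  # e.g. upload to Akamai
        # Only the tiny placeholder is written locally, so the blob
        # directory (and the transaction data behind it) stays small.
        self._local[name] = (PLACEHOLDER, remote_id)

    def get(self, name):
        # A "download" from Plone would serve this placeholder.
        return self._local[name][0]
```

The point of the sketch: only set() ever sees the full payload, so nothing large survives locally past that call.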


First of all, kudos on your candor and your willingness to share your "hack".

I've been out of the Zope loop for a while, but I thought I'd pony up a 
response since your posting was interesting to me, regardless of how out of 
touch w/ reality my response might be. And being out of the loop, I don't 
have to worry anymore about looking dumb!

My first thought is: why not create a content type and store it in the ZODB 
at the time the video is uploaded? The type would include the video metadata 
(vanilla RSS, Dublin Core, etc.) and a link to the off-site content. Much 
more helpful than a "not here" message, yeah?
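A rough, purely illustrative sketch of what such a content type might carry (the field names are my invention, loosely Dublin Core):

```python
# Hypothetical content type: metadata plus a link, never the bytes.
from dataclasses import dataclass

@dataclass
class OffsiteVideo:
    title: str        # Dublin Core-ish metadata
    description: str
    creator: str
    remote_url: str   # where the real video lives (e.g. the CDN)

    def download_target(self):
        # "Download" resolves to the off-site copy instead of a
        # placeholder text file.
        return self.remote_url
```
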

Secondly, I'm wondering why you're using SQL. Is it to interface with legacy 
system(s)? But that's probably just my purist streak talking. :-)

IIRC, there are hooks in Zope around object creation and modification: 
manage_afterAdd() and friends in old-style Zope 2, or zope.lifecycleevent 
subscribers (IObjectCreatedEvent, IObjectModifiedEvent) in newer code. One 
of those would be ideal, as you could strip the blob from the request before 
doing an insert. Yeah?
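As a sketch only (before_save is a made-up hook name, and the object is just a dict; real code would hang this logic off a lifecycle event subscriber):

```python
# Made-up pre-save hook: strip the large payload before the object is
# written, keeping only a reference to the off-site copy.

def before_save(obj, ship):
    data = obj.pop("video_data", None)  # remove the blob-to-be
    if data is not None:
        obj["video_ref"] = ship(data)   # upload; keep only the reference
    return obj
```

Since the blob never reaches the storage layer, neither the blob directory nor the SQL transaction history ever sees it.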

Anyway, sorry I can't be more help w/ the specifics of your installation.
