[ZODB-Dev] RE: [Zope] Analyzing ZODB objects

Bjorn Stabell bjorn at exoweb.net
Sun Oct 26 20:45:17 EST 2003


> From: Dieter Maurer [mailto:dieter at handshake.de] 
>
> Forget about this approach. It might come with Python 3 or 
> Python 4, but it is unlikely. Python is a high level language 
> hiding memory usage; you want precise information about 
> memory usage. I doubt you will find enough arguments and use 
> cases to get this into Python.
[...]
> Do you really care about the size of objects in memory?
> We no longer live in 1980, when memory was a scarce resource.

Well, we still run out of memory, and then it's useful to know why. :)
Caring about memory usage is caring about (one kind of) scalability.
I realize I can't change this aspect of Python, and it's not that
critical if memory usage can be deduced from pickle sizes (as you
mention).
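As a stand-alone sketch of that idea: the length of an object's pickle is a rough proxy for its storage footprint, and usually correlates with in-memory size for plain data objects (the example record below is hypothetical, not from the original thread).

```python
import pickle
import sys

def pickle_size(obj):
    """Rough proxy for an object's footprint: the length of its pickle.

    This tracks what ZODB would actually write to disk, not in-memory
    size, but the two are usually correlated for plain data objects.
    """
    return len(pickle.dumps(obj))

# A metadata-like record with one large field dominates its pickle size.
record = {"title": "Some document", "summary": "x" * 1000, "id": 42}
print(pickle_size(record))    # at least 1000 bytes, driven by the summary
print(sys.getsizeof(record))  # shallow in-memory size of the dict only
```

Note that sys.getsizeof() is shallow (it ignores the values the dict refers to), which is part of why a precise in-memory answer is hard to get from Python.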

[...]
> I had a similar problem (the ZODB grew far too fast) and I 
> wanted to understand why.
>
> I extended Zope's "Undo" information to include the 
> transaction size. This allowed me to see precisely which 
> transactions were larger than expected.
> 
> I extended the "fsdump" utility to include the (pickle) sizes 
> of the object records contained in a transaction and to 
> restrict the range of dumped transactions.
> 
> This has been enough to analyse the problem: ZCatalog's 
> Metadata records caused a transaction size to grow from an 
> expected few hundred bytes to about 500 kB.

Any chance of seeing these changes in the core?
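In the meantime, the core of the technique — summing per-object pickle sizes within each transaction to find the bloated ones — can be sketched without ZODB. This is my own illustrative stand-in, not Dieter's actual fsdump patch: with a real FileStorage you would iterate its transaction records and sum len(record.data) instead of pickling plain objects.

```python
import pickle

def transaction_sizes(transactions):
    """Given an iterable of (description, [objects]) transactions, return
    a list of (description, total_pickle_bytes), largest first.

    Stand-in for an fsdump-style pass: in real ZODB you would iterate
    FileStorage transaction records and sum the stored pickle lengths.
    """
    sized = []
    for desc, objects in transactions:
        total = sum(len(pickle.dumps(obj)) for obj in objects)
        sized.append((desc, total))
    sized.sort(key=lambda item: item[1], reverse=True)
    return sized

# A catalog-style transaction with large metadata dwarfs an ordinary edit:
txns = [
    ("edit title", [{"title": "hi"}]),
    ("reindex", [{"metadata": "m" * 500000}]),
]
for desc, size in transaction_sizes(txns):
    print(desc, size)
```

Sorting by size makes the unexpectedly large transactions (like the ZCatalog metadata case described above) stand out immediately.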


[...]

The remainder of my use cases can currently best be supported by using a
debugger plus DocFinder, I agree, although I think there's still a need
for an admin tool that lets you easily view and browse (multiple)
objects, regardless of whether they are CatalogAware or have a ZMI
interface.  I'll think some more about it.

Thanks for all the help.

-- 
Bjorn
