File manipulation class

The File Class

class tables.File(filename, mode='r', title='', root_uep='/', filters=None, **kwargs)

The in-memory representation of a PyTables file.

An instance of this class is returned when a PyTables file is opened with the tables.open_file() function. It offers methods to manipulate (create, rename, delete...) nodes and handle their attributes, as well as methods to traverse the object tree. The user entry point to the object tree attached to the HDF5 file is represented in the root_uep attribute. Other attributes are available.

File objects support an Undo/Redo mechanism which can be enabled with the File.enable_undo() method. Once the Undo/Redo mechanism is enabled, explicit marks (with an optional unique name) can be set on the state of the database using the File.mark() method. There are two implicit marks which are always available: the initial mark (0) and the final mark (-1). Both the identifier of a mark and its name can be used in undo and redo operations.

Hierarchy manipulation operations (node creation, movement and removal) and attribute handling operations (setting and deleting) made after a mark can be undone by using the File.undo() method, which returns the database to the state of a past mark. If undo() is not followed by operations that modify the hierarchy or attributes, the File.redo() method can be used to return the database to the state of a future mark. Else, future states of the database are forgotten.
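As an illustration, here is a minimal sketch of the Undo/Redo workflow described above (the file and group names are hypothetical):

import tables

f = tables.open_file('demo.h5', mode='a')
f.enable_undo()
f.create_group('/', 'g1')     # hierarchy change, recorded in the action log
m = f.mark('before-g2')       # explicit named mark
f.create_group('/', 'g2')
f.undo(m)                     # back to the state at mark 'before-g2'
assert '/g2' not in f
f.redo()                      # forward again; /g2 reappears
f.disable_undo()
f.close()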

Note that data handling operations cannot currently be undone or redone. Also, hierarchy manipulation operations on nodes that do not support the Undo/Redo mechanism issue an UndoRedoWarning before changing the database.

The Undo/Redo mechanism is persistent between sessions and can only be disabled by calling the File.disable_undo() method.

File objects can also act as context managers when used in a with statement. When exiting a context, the file is automatically closed.
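For example (the file name is hypothetical):

import tables

with tables.open_file('data/test.h5', mode='r') as f:
    print(f.root)
# The file is closed here, even if an exception was raised inside the block.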

Parameters :

filename : str

The name of the file (supports environment variable expansion). It is suggested that file names have any of the .h5, .hdf or .hdf5 extensions, although this is not mandatory.

mode : str

The mode to open the file. It can be one of the following:

  • ‘r’: Read-only; no data can be modified.
  • ‘w’: Write; a new file is created (an existing file with the same name would be deleted).
  • ‘a’: Append; an existing file is opened for reading and writing, and if the file does not exist it is created.
  • ‘r+’: It is similar to ‘a’, but the file must already exist.

title : str

If the file is to be created, a TITLE string attribute will be set on the root group with the given value. Otherwise, the title will be read from disk and this parameter will have no effect.

root_uep : str

The root User Entry Point. This is a group in the HDF5 hierarchy which will be taken as the starting point to create the object tree. It can be whatever existing group in the file, named by its HDF5 path. If it does not exist, an HDF5ExtError is issued. Use this if you do not want to build the entire object tree, but rather only a subtree of it.

Changed in version 3.0: The rootUEP parameter has been renamed to root_uep.

filters : Filters

An instance of the Filters class (see The Filters class) that provides information about the desired I/O filters applicable to the leaves that hang directly from the root group, unless other filter properties are specified for those leaves. Child groups that do not specify their own filter properties will inherit these ones, and propagate them in turn to their children.

Notes

In addition, it recognizes the (lowercase) names of parameters present in tables/parameters.py as additional keyword arguments. See PyTables parameter files for detailed information on the supported parameters.
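As a hedged illustration, the following sketch creates a file whose root-level leaves default to zlib compression (the file and array names are hypothetical):

import tables

zlib5 = tables.Filters(complevel=5, complib='zlib')
with tables.open_file('compressed.h5', mode='w', title='Compressed file',
                      filters=zlib5) as f:
    # This leaf hangs from the root group, so it inherits zlib5.
    f.create_carray('/', 'x', atom=tables.Int64Atom(), shape=(1000,))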

File attributes

filename

The name of the opened file.

format_version

The PyTables version number of this file.

isopen

True if the underlying file is open, False otherwise.

mode

The mode in which the file was opened.

root

The root of the object tree hierarchy (a Group instance).

root_uep

The UEP (user entry point) group name in the file (see the open_file() function).

Changed in version 3.0: The rootUEP attribute has been renamed to root_uep.

File properties

File.title

The title of the root group in the file.

File.filters

Default filter properties for the root group (see The Filters class).

File.open_count

The number of times this file is currently open.

File methods - file handling

File.close()

Flush all the alive leaves in the object tree and close the file.

File.copy_file(dstfilename, overwrite=False, **kwargs)

Copy the contents of this file to dstfilename.

Parameters :

dstfilename : str

A path string indicating the name of the destination file. If it already exists, the copy will fail with an IOError, unless the overwrite argument is true.

overwrite : bool, optional

If true, the destination file will be overwritten if it already exists. In this case, the destination file must be closed, or errors will occur. Defaults to False.

kwargs :

Additional keyword arguments discussed below.

Notes

Additional keyword arguments may be passed to customize the copying process. For instance, title and filters may be changed, user attributes may or may not be copied, data may be sub-sampled, stats may be collected, etc. Arguments unknown to nodes are simply ignored. Check the documentation for copying operations of nodes to see which options they support.

In addition, it recognizes the names of parameters present in tables/parameters.py as additional keyword arguments. See PyTables parameter files for detailed information on the supported parameters.

Copying a file usually has the beneficial side effect of creating a more compact and cleaner version of the original file.
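A minimal sketch (the file names are hypothetical):

import tables

f = tables.open_file('data/test.h5', mode='r')
# Copying compacts the file, reclaiming space left by removed nodes.
f.copy_file('data/test_compacted.h5', overwrite=True)
f.close()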

File.flush()

Flush all the alive leaves in the object tree.

File.fileno()

Return the underlying OS integer file descriptor.

This is needed for lower-level file interfaces, such as the fcntl module.

File.__enter__()

Enter a context and return the same file.

File.__exit__(*exc_info)

Exit a context and close the file.

File.__str__()

Return a short string representation of the object tree.

Examples

>>> f = tables.open_file('data/test.h5')
>>> print(f)
data/test.h5 (File) 'Table Benchmark'
Last modif.: 'Mon Sep 20 12:40:47 2004'
Object Tree:
/ (Group) 'Table Benchmark'
/tuple0 (Table(100,)) 'This is the table title'
/group0 (Group) ''
/group0/tuple1 (Table(100,)) 'This is the table title'
/group0/group1 (Group) ''
/group0/group1/tuple2 (Table(100,)) 'This is the table title'
/group0/group1/group2 (Group) ''

File.__repr__()

Return a detailed string representation of the object tree.

File.get_file_image()

Retrieves an in-memory image of an existing, open HDF5 file.

Note

This method requires HDF5 >= 1.8.9.

New in version 3.0.

File.get_filesize()

Returns the size of an HDF5 file.

The returned size is that of the entire file, as opposed to only the HDF5 portion of it. That is, the size includes the user block (if any), the HDF5 portion of the file, and any data that may have been appended beyond the data written through the HDF5 library.

New in version 3.0.

File.get_userblock_size()

Retrieves the size of a user block.

New in version 3.0.
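A hedged sketch of the three methods above, assuming f is an open File and the HDF5 library is recent enough for get_file_image():

image = f.get_file_image()        # bytes holding the whole HDF5 file
with open('snapshot.h5', 'wb') as out:
    out.write(image)
print(f.get_filesize(), f.get_userblock_size())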

File methods - hierarchy manipulation

File.copy_children(srcgroup, dstgroup, overwrite=False, recursive=False, createparents=False, **kwargs)

Copy the children of a group into another group.

Parameters :

srcgroup : str

The group to copy from.

dstgroup : str

The destination group.

overwrite : bool, optional

If True, the destination group will be overwritten if it already exists. Defaults to False.

recursive : bool, optional

If True, all descendant nodes of srcgroup are recursively copied. Defaults to False.

createparents : bool, optional

If True, any necessary parents of dstgroup will be created. Defaults to False.

kwargs : dict

Additional keyword arguments can be used to customize the copying process. See the documentation of Group._f_copy_children() for a description of those arguments.
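For example (the group paths are hypothetical):

# Copy everything under /group0, recursively, into a new /backup group.
f.copy_children('/group0', '/backup', recursive=True, createparents=True)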

File.copy_node(where, newparent=None, newname=None, name=None, overwrite=False, recursive=False, createparents=False, **kwargs)

Copy the node specified by where and name to newparent/newname.

Parameters :

where : str

This argument works as in File.get_node(), referencing the node to be acted upon.

newparent : str or Group

The destination group that the node will be copied into (a path name or a Group instance). If not specified or None, the current parent group is chosen as the new parent.

newname : str

The name to be assigned to the new copy in its destination (a string). If it is not specified or None, the current name is chosen as the new name.

name : str

This argument works as in File.get_node(), referencing the node to be acted upon.

overwrite : bool, optional

If True, the destination group will be overwritten if it already exists. Defaults to False.

recursive : bool, optional

If True, all descendant nodes of the source node are recursively copied. Defaults to False.

createparents : bool, optional

If True, any necessary parents of newparent will be created. Defaults to False.

kwargs :

Additional keyword arguments can be used to customize the copying process. See the documentation of Group._f_copy() for a description of those arguments.

Returns :

node : Node

The newly created copy of the source node (i.e. the destination node). See Node._f_copy() for further details on the semantics of copying nodes.
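For example (the node paths are hypothetical):

# Copy a single leaf into another group under a new name.
new_node = f.copy_node('/group0/tuple1', newparent='/group0/group1',
                       newname='tuple1_copy')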

File.create_array(where, name, obj=None, title='', byteorder=None, createparents=False, atom=None, shape=None)

Create a new array.

Parameters :

where : str or Group

The parent group from which the new array will hang. It can be a path string (for example ‘/level1/leaf5’), or a Group instance (see The Group class).

name : str

The name of the new array.

obj : python object

The array or scalar to be saved. Accepted types are NumPy arrays and scalars, as well as native Python sequences and scalars, provided that values are regular (i.e. they are not like [[1,2],2]) and homogeneous (i.e. all the elements are of the same type).

Also, objects that have some of their dimensions equal to 0 are not supported (use an EArray node (see The EArray class) if you want to store an array with one of its dimensions equal to 0).

Changed in version 3.0: The object parameter has been renamed to obj.

title : str

A description for this node (it sets the TITLE HDF5 attribute on disk).

byteorder : str

The byteorder of the data on disk, specified as ‘little’ or ‘big’. If this is not specified, the byteorder is that of the given object.

createparents : bool, optional

Whether to create the needed groups for the parent path to exist (not done by default).

atom : Atom

An Atom (see The Atom class and its descendants) instance representing the type and shape of the atomic objects to be saved.

New in version 3.0.

shape : tuple of ints

The shape of the stored array.

New in version 3.0.

See also

Array
for more information on arrays
create_table
for more information on the rest of parameters
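A minimal sketch (the group and array names are hypothetical):

import numpy as np

# Store a NumPy array, creating the missing /arrays group on the way.
arr = f.create_array('/arrays', 'vector', obj=np.arange(10.0),
                     title='A ramp', createparents=True)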
File.create_carray(where, name, atom=None, shape=None, title='', filters=None, chunkshape=None, byteorder=None, createparents=False, obj=None)

Create a new chunked array.

Parameters :

where : str or Group

The parent group from which the new array will hang. It can be a path string (for example ‘/level1/leaf5’), or a Group instance (see The Group class).

name : str

The name of the new array.

atom : Atom

An Atom (see The Atom class and its descendants) instance representing the type and shape of the atomic objects to be saved.

Changed in version 3.0: The atom parameter can be None (default) if obj is provided.

shape : tuple

The shape of the new array.

Changed in version 3.0: The shape parameter can be None (default) if obj is provided.

title : str, optional

A description for this node (it sets the TITLE HDF5 attribute on disk).

filters : Filters, optional

An instance of the Filters class (see The Filters class) that provides information about the desired I/O filters to be applied during the life of this object.

chunkshape : tuple or number or None, optional

The shape of the data chunk to be read or written in a single HDF5 I/O operation. Filters are applied to those chunks of data. The dimensionality of chunkshape must be the same as that of shape. If None, a sensible value is calculated (which is recommended).

byteorder : str, optional

The byteorder of the data on disk, specified as ‘little’ or ‘big’. If this is not specified, the byteorder is that of the given object.

createparents : bool, optional

Whether to create the needed groups for the parent path to exist (not done by default).

obj : python object

The array or scalar to be saved. Accepted types are NumPy arrays and scalars, as well as native Python sequences and scalars, provided that values are regular (i.e. they are not like [[1,2],2]) and homogeneous (i.e. all the elements are of the same type).

Also, objects that have some of their dimensions equal to 0 are not supported. Please use an EArray node (see The EArray class) if you want to store an array with one of its dimensions equal to 0.

The obj parameter is optional and can be provided as an alternative to the atom and shape parameters. If both obj and atom and/or shape are provided, they must be consistent with each other.

New in version 3.0.

See also

CArray
for more information on chunked arrays
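A minimal sketch using the atom/shape form (the names are hypothetical):

import tables

# Allocate a compressed 100x100 float64 chunked array.
carr = f.create_carray('/', 'grid', atom=tables.Float64Atom(),
                       shape=(100, 100),
                       filters=tables.Filters(complevel=1))
carr[0, :] = 1.0  # chunked arrays support partial writes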
File.create_earray(where, name, atom=None, shape=None, title='', filters=None, expectedrows=1000, chunkshape=None, byteorder=None, createparents=False, obj=None)

Create a new enlargeable array.

Parameters :

where : str or Group

The parent group from which the new array will hang. It can be a path string (for example ‘/level1/leaf5’), or a Group instance (see The Group class).

name : str

The name of the new array.

atom : Atom

An Atom (see The Atom class and its descendants) instance representing the type and shape of the atomic objects to be saved.

Changed in version 3.0: The atom parameter can be None (default) if obj is provided.

shape : tuple

The shape of the new array. One (and only one) of the shape dimensions must be 0. The dimension being 0 means that the resulting EArray object can be extended along it. Multiple enlargeable dimensions are not supported right now.

Changed in version 3.0: The shape parameter can be None (default) if obj is provided.

title : str, optional

A description for this node (it sets the TITLE HDF5 attribute on disk).

expectedrows : int, optional

A user estimate about the number of row elements that will be added to the growable dimension in the EArray node. If not provided, the default value is EXPECTED_ROWS_EARRAY (see tables/parameters.py). If you plan to create either a much smaller or a much bigger array try providing a guess; this will optimize the HDF5 B-Tree creation and management process time and the amount of memory used.

chunkshape : tuple, numeric, or None, optional

The shape of the data chunk to be read or written in a single HDF5 I/O operation. Filters are applied to those chunks of data. The dimensionality of chunkshape must be the same as that of shape (beware: no dimension should be 0 this time!). If None, a sensible value is calculated based on the expectedrows parameter (which is recommended).

byteorder : str, optional

The byteorder of the data on disk, specified as ‘little’ or ‘big’. If this is not specified, the byteorder is that of the platform.

createparents : bool, optional

Whether to create the needed groups for the parent path to exist (not done by default).

obj : python object

The array or scalar to be saved. Accepted types are NumPy arrays and scalars, as well as native Python sequences and scalars, provided that values are regular (i.e. they are not like [[1,2],2]) and homogeneous (i.e. all the elements are of the same type).

The obj parameter is optional and can be provided as an alternative to the atom and shape parameters. If both obj and atom and/or shape are provided, they must be consistent with each other.

New in version 3.0.

See also

EArray
for more information on enlargeable arrays
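A minimal sketch (the names are hypothetical); note the 0 in shape marking the single enlargeable dimension:

import tables

ea = f.create_earray('/', 'measurements', atom=tables.Float32Atom(),
                     shape=(0, 3))
ea.append([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])  # now 2 rows of 3 columns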

File.create_external_link(where, name, target, createparents=False)

Create an external link.

Create an external link to a target node with the given name in where location. target can be a node object in another file or a path string in the form ‘file:/path/to/node’. If createparents is true, the intermediate groups required for reaching where are created (the default is not doing so).

The returned node is an ExternalLink instance.
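A hedged sketch (the file and node names are hypothetical):

# Link /lnk in this file to the node /arr inside other.h5.
lnk = f.create_external_link('/', 'lnk', 'other.h5:/arr')
node = lnk()  # calling the link dereferences it and opens the target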

File.create_group(where, name, title='', filters=None, createparents=False)

Create a new group.

Parameters :

where : str or Group

The parent group from which the new group will hang. It can be a path string (for example ‘/level1/leaf5’), or a Group instance (see The Group class).

name : str

The name of the new group.

title : str, optional

A description for this node (it sets the TITLE HDF5 attribute on disk).

filters : Filters

An instance of the Filters class (see The Filters class) that provides information about the desired I/O filters applicable to the leaves that hang directly from this new group (unless other filter properties are specified for those leaves). Child groups that do not specify their own filter properties will inherit these ones.

createparents : bool

Whether to create the needed groups for the parent path to exist (not done by default).

See also

Group
for more information on groups
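For example (the group names are hypothetical):

# Create /detector/readout, building the missing /detector group on the way.
g = f.create_group('/detector', 'readout', title='Readout data',
                   createparents=True)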

File.create_hard_link(where, name, target, createparents=False)

Create a hard link.

Create a hard link to a target node with the given name in where location. target can be a node object or a path string. If createparents is true, the intermediate groups required for reaching where are created (the default is not doing so).

The returned node is a regular Group or Leaf instance.

File.create_soft_link(where, name, target, createparents=False)

Create a soft link.

Create a soft link (aka symbolic link) to a target node with the given name in where location. target can be a node object or a path string. If createparents is true, the intermediate groups required for reaching where are created (the default is not doing so).

The returned node is a SoftLink instance. See the SoftLink class (in The SoftLink class) for more information on soft links.
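A hedged sketch of both link-creating calls, assuming an existing node /group0/tuple1:

hl = f.create_hard_link('/', 'tuple_hard', '/group0/tuple1')
sl = f.create_soft_link('/', 'tuple_soft', '/group0/tuple1')
print(sl())  # dereference the soft link to reach the target node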

File.create_table(where, name, description=None, title='', filters=None, expectedrows=10000, chunkshape=None, byteorder=None, createparents=False, obj=None)

Create a new table with the given name in where location.

Parameters :

where : str or Group

The parent group from which the new table will hang. It can be a path string (for example ‘/level1/leaf5’), or a Group instance (see The Group class).

name : str

The name of the new table.

description : Description

This is an object that describes the table, i.e. how many columns it has, their names, types, shapes, etc. It can be any of the following:

  • A user-defined class: This should inherit from the IsDescription class (see The IsDescription class) where table fields are specified.
  • A dictionary: Useful when you do not know beforehand the structure of your table.
  • A Description instance: You can use the description attribute of another table to create a new one with the same structure.
  • A NumPy dtype: A completely general structured NumPy dtype.
  • A NumPy (structured) array instance: The dtype of this structured array will be used as the description. Also, in case the array has actual data, it will be injected into the newly created table.

Changed in version 3.0: The description parameter can be None (default) if obj is provided. In that case the structure of the table is deduced from obj.

title : str

A description for this node (it sets the TITLE HDF5 attribute on disk).

filters : Filters

An instance of the Filters class (see The Filters class) that provides information about the desired I/O filters to be applied during the life of this object.

expectedrows : int

A user estimate of the number of records that will be in the table. If not provided, the default value is EXPECTED_ROWS_TABLE (see tables/parameters.py). If you plan to create a bigger table try providing a guess; this will optimize the HDF5 B-Tree creation and management process time and memory used.

chunkshape :

The shape of the data chunk to be read or written in a single HDF5 I/O operation. Filters are applied to those chunks of data. The rank of the chunkshape for tables must be 1. If None, a sensible value is calculated based on the expectedrows parameter (which is recommended).

byteorder : str

The byteorder of data on disk, specified as ‘little’ or ‘big’. If this is not specified, the byteorder is that of the platform, unless you passed an array as the description, in which case its byteorder will be used.

createparents : bool

Whether to create the needed groups for the parent path to exist (not done by default).

obj : python object

The recarray to be saved. Accepted types are NumPy record arrays, as well as native Python sequences convertible to NumPy record arrays.

The obj parameter is optional and can be provided as an alternative to the description parameter. If both obj and description are provided, they must be consistent with each other.

New in version 3.0.

See also

Table
for more information on tables
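A minimal sketch using an IsDescription subclass (the names are hypothetical):

import tables

class Particle(tables.IsDescription):
    name = tables.StringCol(16)    # 16-character string column
    energy = tables.Float64Col()   # double-precision column

t = f.create_table('/', 'particles', Particle, title='Particle data')
row = t.row
row['name'] = 'proton'
row['energy'] = 938.3
row.append()   # queue the record
t.flush()      # write queued records to disk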
File.create_vlarray(where, name, atom=None, title='', filters=None, expectedrows=None, chunkshape=None, byteorder=None, createparents=False, obj=None)

Create a new variable-length array.

Parameters :

where : str or Group

The parent group from which the new array will hang. It can be a path string (for example ‘/level1/leaf5’), or a Group instance (see The Group class).

name : str

The name of the new array.

atom : Atom

An Atom (see The Atom class and its descendants) instance representing the type and shape of the atomic objects to be saved.

Changed in version 3.0: The atom parameter can be None (default) if obj is provided.

title : str, optional

A description for this node (it sets the TITLE HDF5 attribute on disk).

filters : Filters

An instance of the Filters class (see The Filters class) that provides information about the desired I/O filters to be applied during the life of this object.

expectedrows : int, optional

A user estimate about the number of row elements that will be added to the growable dimension in the VLArray node. If not provided, the default value is EXPECTED_ROWS_VLARRAY (see tables/parameters.py). If you plan to create either a much smaller or a much bigger VLArray try providing a guess; this will optimize the HDF5 B-Tree creation and management process time and the amount of memory used.

New in version 3.0.

chunkshape : int or tuple of int, optional

The shape of the data chunk to be read or written in a single HDF5 I/O operation. Filters are applied to those chunks of data. The dimensionality of chunkshape must be 1. If None, a sensible value is calculated (which is recommended).

byteorder : str, optional

The byteorder of the data on disk, specified as ‘little’ or ‘big’. If this is not specified, the byteorder is that of the platform.

createparents : bool, optional

Whether to create the needed groups for the parent path to exist (not done by default).

obj : python object

The array or scalar to be saved. Accepted types are NumPy arrays and scalars, as well as native Python sequences and scalars, provided that values are regular (i.e. they are not like [[1,2],2]) and homogeneous (i.e. all the elements are of the same type).

The obj parameter is optional and can be provided as an alternative to the atom parameter. If both obj and atom are provided, they must be consistent with each other.

New in version 3.0.

See also

VLArray
for more information on variable-length arrays

Changed in version 3.0: The expectedsizeinMB parameter has been replaced by expectedrows.
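A minimal sketch (the names are hypothetical); each row may hold a different number of atoms:

import tables

vl = f.create_vlarray('/', 'ragged', atom=tables.Int32Atom(),
                      title='Ragged rows')
vl.append([1, 2, 3])
vl.append([4])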
File.move_node(where, newparent=None, newname=None, name=None, overwrite=False, createparents=False)

Move the node specified by where and name to newparent/newname.

Parameters :

where, name :

These arguments work as in File.get_node(), referencing the node to be acted upon.

newparent :

The destination group the node will be moved into (a path name or a Group instance). If it is not specified or None, the current parent group is chosen as the new parent.

newname :

The new name to be assigned to the node in its destination (a string). If it is not specified or None, the current name is chosen as the new name.

Notes

The other arguments work as in Node._f_move().
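For example (the node paths are hypothetical):

# Move and rename a node in a single call.
f.move_node('/group0/tuple1', newparent='/', newname='tuple1_moved')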

File.remove_node(where, name=None, recursive=False)

Remove the object node name under where location.

Parameters :

where, name :

These arguments work as in File.get_node(), referencing the node to be acted upon.

recursive : bool

If not supplied or false, the node will be removed only if it has no children; if it does, a NodeError will be raised. If supplied with a true value, the node and all its descendants will be completely removed.

File.rename_node(where, newname, name=None, overwrite=False)

Change the name of the node specified by where and name to newname.

Parameters :

where, name :

These arguments work as in File.get_node(), referencing the node to be acted upon.

newname : str

The new name to be assigned to the node (a string).

overwrite : bool

Whether to recursively remove a node with the same newname if it already exists (not done by default).

File methods - tree traversal

File.get_node(where, name=None, classname=None)

Get the node under where with the given name.

where can be a Node instance (see The Node class) or a path string leading to a node. If no name is specified, that node is returned.

If a name is specified, this must be a string with the name of a node under where. In this case the where argument can only lead to a Group (see The Group class) instance (else a TypeError is raised). The node called name under the group where is returned.

In both cases, if the node to be returned does not exist, a NoSuchNodeError is raised. Please note that hidden nodes are also considered.

If the classname argument is specified, it must be the name of a class derived from Node. If the node is found but it is not an instance of that class, a NoSuchNodeError is also raised.
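For example (the node paths are hypothetical):

# Equivalent lookups; classname makes the call fail unless a Table is found.
t1 = f.get_node('/group0/tuple1')
t2 = f.get_node('/group0', 'tuple1', classname='Table')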

File.is_visible_node(path)

Is the node under path visible?

If the node does not exist, a NoSuchNodeError is raised.

File.iter_nodes(where, classname=None)

Iterate over children nodes hanging from where.

Parameters :

where :

This argument works as in File.get_node(), referencing the node to be acted upon.

classname :

If the name of a class derived from Node (see The Node class) is supplied, only instances of that class (or subclasses of it) will be returned.

Notes

The returned nodes are alphanumerically sorted by their name. This is an iterator version of File.list_nodes().

File.list_nodes(where, classname=None)

Return a list with children nodes hanging from where.

This is a list-returning version of File.iter_nodes().
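For example (the group path is hypothetical):

# iter_nodes yields children lazily; list_nodes materializes the same set.
for leaf in f.iter_nodes('/group0', classname='Leaf'):
    print(leaf)
tables_only = f.list_nodes('/group0', classname='Table')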

File.walk_groups(where='/')

Recursively iterate over groups (not leaves) hanging from where.

The where group itself is listed first (preorder), then each of its child groups (following an alphanumerical order) is also traversed, following the same procedure. If where is not supplied, the root group is used.

The where argument can be a path string or a Group instance (see The Group class).

File.walk_nodes(where='/', classname=None)

Recursively iterate over nodes hanging from where.

Parameters :

where : str or Group, optional

If supplied, the iteration starts from (and includes) this group. It can be a path string or a Group instance (see The Group class).

classname :

If the name of a class derived from Node (see The Node class) is supplied, only instances of that class (or subclasses of it) will be returned.

Notes

This version iterates over the leaves in the same group in order to avoid keeping a list of references to them, which would prevent the LRU cache from removing them after use.

Examples

# Recursively print all the EArray nodes hanging from '/detector'.
print("Nodes hanging from group '/detector':")
for node in h5file.walk_nodes('/detector', classname='EArray'):
    print(node)

File.__contains__(path)

Is there a node with that path?

Returns True if the file has a node with the given path (a string), False otherwise.

File.__iter__()

Recursively iterate over the nodes in the tree.

This is equivalent to calling File.walk_nodes() with no arguments.

Examples

# Recursively list all the nodes in the object tree.
h5file = tables.open_file('vlarray1.h5')
print("All nodes in the object tree:")
for node in h5file:
    print(node)

File methods - Undo/Redo support

File.disable_undo()

Disable the Undo/Redo mechanism.

Disabling the Undo/Redo mechanism leaves the database in the current state and forgets past and future database states. This makes File.mark(), File.undo(), File.redo() and other methods fail with an UndoRedoError.

Calling this method when the Undo/Redo mechanism is already disabled raises an UndoRedoError.

File.enable_undo(filters=Filters(complevel=1, complib='zlib', shuffle=True, fletcher32=False))

Enable the Undo/Redo mechanism.

This operation prepares the database for undoing and redoing modifications in the node hierarchy. This allows File.mark(), File.undo(), File.redo() and other methods to be called.

The filters argument, when specified, must be an instance of class Filters (see The Filters class) and is meant for setting the compression values for the action log. The default is having compression enabled, as the gains in terms of space can be considerable. You may want to disable compression if you want maximum speed for Undo/Redo operations.

Calling this method when the Undo/Redo mechanism is already enabled raises an UndoRedoError.

File.get_current_mark()

Get the identifier of the current mark.

Returns the identifier of the current mark. This can be used to know the state of a database after an application crash, or to get the identifier of the initial implicit mark after a call to File.enable_undo().

This method can only be called when the Undo/Redo mechanism has been enabled. Otherwise, an UndoRedoError is raised.

File.goto(mark)

Go to a specific mark of the database.

Returns the database to the state associated with the specified mark. Both the identifier of a mark and its name can be used.

This method can only be called when the Undo/Redo mechanism has been enabled. Otherwise, an UndoRedoError is raised.

File.is_undo_enabled()

Is the Undo/Redo mechanism enabled?

Returns True if the Undo/Redo mechanism has been enabled for this file, False otherwise. Please note that this mechanism is persistent, so a newly opened PyTables file may already have Undo/Redo support enabled.

File.mark(name=None)

Mark the state of the database.

Creates a mark for the current state of the database. A unique (and immutable) identifier for the mark is returned. An optional name (a string) can be assigned to the mark. Both the identifier of a mark and its name can be used in File.undo() and File.redo() operations. When the name has already been used for another mark, an UndoRedoError is raised.

This method can only be called when the Undo/Redo mechanism has been enabled. Otherwise, an UndoRedoError is raised.

File.redo(mark=None)

Go to a future state of the database.

Returns the database to the state associated with the specified mark. Both the identifier of a mark and its name can be used. If the mark is omitted, the next created mark is used. If there are no future marks, or the specified mark is not newer than the current one, an UndoRedoError is raised.

This method can only be called when the Undo/Redo mechanism has been enabled. Otherwise, an UndoRedoError is raised.

File.undo(mark=None)

Go to a past state of the database.

Returns the database to the state associated with the specified mark. Both the identifier of a mark and its name can be used. If the mark is omitted, the last created mark is used. If there are no past marks, or the specified mark is not older than the current one, an UndoRedoError is raised.

This method can only be called when the Undo/Redo mechanism has been enabled. Otherwise, an UndoRedoError is raised.

File methods - attribute handling

File.copy_node_attrs(where, dstnode, name=None)

Copy PyTables attributes from one node to another.

Parameters :

where, name :

These arguments work as in File.get_node(), referencing the node to be acted upon.

dstnode :

The destination node where the attributes will be copied to. It can be a path string or a Node instance (see The Node class).

File.del_node_attr(where, attrname, name=None)

Delete a PyTables attribute from the given node.

Parameters :

where, name :

These arguments work as in File.get_node(), referencing the node to be acted upon.

attrname :

The name of the attribute to delete. If the named attribute does not exist, an AttributeError is raised.

File.get_node_attr(where, attrname, name=None)

Get a PyTables attribute from the given node.

Parameters :

where, name :

These arguments work as in File.get_node(), referencing the node to be acted upon.

attrname :

The name of the attribute to retrieve. If the named attribute does not exist, an AttributeError is raised.

File.set_node_attr(where, attrname, attrvalue, name=None)

Set a PyTables attribute for the given node.

Parameters :

where, name :

These arguments work as in File.get_node(), referencing the node to be acted upon.

attrname :

The name of the attribute to set.

attrvalue :

The value of the attribute to set. Any kind of Python object (like strings, ints, floats, lists, tuples, dicts, small NumPy objects ...) can be stored as an attribute. However, if necessary, pickle is automatically used so as to serialize objects that you might want to save. See the AttributeSet class for details.

Notes

If the node already has a large number of attributes, a PerformanceWarning is issued.
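A hedged sketch of the four attribute helpers above, on a hypothetical /group0 node:

f.set_node_attr('/group0', 'creator', 'alice')
print(f.get_node_attr('/group0', 'creator'))   # -> 'alice'
f.copy_node_attrs('/group0', '/group0/group1')
f.del_node_attr('/group0', 'creator')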