CLASS HDF_DATASET

INT:getIntValue(STRING:hdf_var_name,INT:default_value)
STRING:getValue(STRING:hdf_var_name,STRING:default_value)
setValue(STRING:hdf_var_name,STRING:value)

These functions read and write individual data elements in the HDF dataset. You must provide a default value of the proper type; it is returned when the named variable does not exist. (See the first example at the end of this reference.)

readFile(STRING:filename)
readString(STRING:hdf_string_data)
writeFile(STRING:filename)
STRING:writeString()
STRING:dump()

These functions read and write chunks of HDF data from files or strings. Remember that extra HDF information such as comments is not kept in the dataset, so calling readFile() and then writeFile() will basically produce a nicely formatted output file without the comments. You can generate HDF output in three different formats or styles, all of which read back in the same way (the second example below shows all three):

  writeFile() dumps a nicely formatted file with {} nesting and proper
  indentation.

  writeString() creates the most compact representation, using {}
  nesting but eliminating all unnecessary whitespace. This is generally
  used when rendering HDF for storage in a database or other small data
  location.

  dump() writes out the fully qualified HDF path of every element
  (i.e. A.B.C=1), and is generally used to make configuration files or
  output dumps easy to read and use.

HDF_DATASET:getObj(STRING:hdf_name)

Returns a "sub-dataset" of the current dataset, rooted at the given node.

HDF_DATASET:top()

Returns the root level of the dataset.

copy(STRING:name,HDF_DATASET:src_dataset)

Copies the source dataset into the named location in the destination dataset (i.e. the object you are calling copy on). See the third example below.

removeTree(STRING:hdf_path)

Deletes and removes an HDF subtree from the current dataset.

setSymLink(STRING:hdf_name_src,STRING:hdf_name_dest)

This is something like a UNIX symlink: it points the HDF source node name at the HDF destination node name. For example:

  hdf.setValue("foo","bar")
  hdf.setSymLink("baz","foo")
  print hdf.getValue("baz","")   # ---> "bar"

LIST_of_TUPLES:getAttrs(STRING:hdf_var_name)
setAttr(STRING:hdf_node_name,STRING:attr_name,STRING:attr_value)

These functions give you access to the attributes which can be attached to any HDF node. getAttrs() returns a list of the attributes present on a node, in the form [(name1,value1),(name2,value2),...]. setAttr() sets an attribute on a node (see the last example below). Check out the HDF dataset documentation for a more thorough description of attributes.

HDF_DATASET:child()
HDF_DATASET:next()
STRING:name()
STRING:value()

These functions are used for walking the HDF tree. For example, the following function recursively walks an entire tree (although the printout may not be exactly what you expect, because it only contains the local node names):

  def render_node(a_node):
      print "%s = %s" % (a_node.name(), a_node.value())

  def tree_walk(hdf_node):
      while hdf_node:
          render_node(hdf_node)
          tree_walk(hdf_node.child())
          hdf_node = hdf_node.next()
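
The examples that follow are minimal sketches, not definitive code: they assume the Python bindings are importable as the neo_util module (the module name is not stated on this page and may differ in your build), and all HDF variable names (Page.Title and so on) are illustrative. First, the individual value accessors:

  import neo_util

  hdf = neo_util.HDF()

  # HDF stores every value as a string; setValue takes the string form.
  hdf.setValue("Page.Title", "Home")
  hdf.setValue("Page.Hits", "42")

  # The default value determines the return type and is returned when
  # the named variable does not exist.
  print hdf.getValue("Page.Title", "")        # ---> "Home"
  print hdf.getIntValue("Page.Hits", 0)       # ---> 42
  print hdf.getValue("Page.Missing", "none")  # ---> "none"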
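Second, the three output styles side by side. The exact whitespace produced by writeString() and dump() is indicative only:

  import neo_util

  hdf = neo_util.HDF()
  hdf.readString("A {\n  B {\n    C = 1\n  }\n}\n")

  # Compact {} form, suitable for a database column
  # (whitespace approximate).
  print hdf.writeString()

  # Fully qualified path for every element, easy to read and grep.
  print hdf.dump()            # ---> A.B.C = 1

  # Nicely indented {} form, written to a file.
  hdf.writeFile("out.hdf")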
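Third, the tree-manipulation calls getObj(), copy(), and removeTree(), here copying a subtree within a single dataset:

  import neo_util

  hdf = neo_util.HDF()
  hdf.setValue("Menu.0.Title", "Home")
  hdf.setValue("Menu.0.URL", "/")

  # getObj returns a sub-dataset rooted at Menu.0.
  item = hdf.getObj("Menu.0")
  print item.getValue("Title", "")            # ---> "Home"

  # Copy the Menu subtree to Backup.Menu, then drop the original.
  hdf.copy("Backup.Menu", hdf.getObj("Menu"))
  hdf.removeTree("Menu")
  print hdf.getValue("Backup.Menu.0.URL", "") # ---> "/"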
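Last, node attributes. The attribute name and value are illustrative; only the call shapes and the list-of-tuples return form come from this page:

  import neo_util

  hdf = neo_util.HDF()
  hdf.setValue("Page.Title", "Home")

  # Attach an attribute to the node, then read all of its attributes
  # back as a list of (name, value) tuples.
  hdf.setAttr("Page.Title", "Lang", "en")
  print hdf.getAttrs("Page.Title")            # ---> [('Lang', 'en')]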