Tuesday, May 30, 2017

Blender Rendering on a SLURM Cluster

Background

Currently I am working on a movie of our accelerator ring. The movie will be a camera flight along the ring and will consist of more than 5000 frames to realize a sequence of about 2 minutes in length. The blend file is more than 800 MB in size, and rendering one frame (30 samples per pixel) on my local quad-core i5 machine (I also have a CUDA-capable graphics card) takes about 5 hours, as I am using effects like depth of field, a couple of lights in the scene, full global illumination, etc.
Fortunately, at work we have a high performance computing cluster (see https://www.gsi.de/en/work/research/it/hpc.htm) with 550 Linux machines offering a maximum of 13800 CPU cores, using SLURM as job manager to distribute the jobs among the cores.

Blender@SLURM: Not Always Easy

To use the possibilities of our HPC cluster, I started programming a batch script which submits the following Blender command line to the cluster system:
blender -b <myBlenderFile>.blend -o <output path> -F PNG -s <startFrame> -e <endFrame> -a
which needs to be called via a Linux bash shell script. This command calls Blender in batch mode ("-b") and renders the animation ("-a") from start frame "-s" to end frame "-e" as PNG images ("-F PNG"). The output path is specified using the "-o" option.
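For example, a concrete call could look like this (file name and frame range are placeholders; the "#" characters in the output path are replaced by the zero-padded frame number):
blender -b ring.blend -o ./output/frame_#### -F PNG -s 1 -e 10 -a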


I started with a naive approach and ended up in frustration. Simply putting the command above into a shell script (with "#!/bin/bash" as the first line) led to errors: the cluster refused to accept the script and at some point always bailed out with the error "slurmstepd: error: Exceeded step memory limit".


With some help from our local experts, I found out that this error is due to the memory management of Blender: the SLURM cluster assigns memory per CPU core, whereas Blender does not manage memory on a per-core basis. It simply allocates what it needs (in my case about 4 GB in total) and therefore requires exclusive access to a machine's memory.
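Expressed as batch-script directives, the difference looks roughly like this (both are standard SLURM options; the value is only an example):
#SBATCH --mem-per-cpu=2G   # SLURM's model: a memory budget per allocated CPU core
#SBATCH --exclusive        # what Blender needs: a whole machine including all of its memory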


Another point is the run time of the job(s). As I said, the run time on a 4-core machine is about 5 hours per frame; the SLURM cluster machines have Xeon processors with up to 40 cores, reducing the run time to something like 15 minutes per frame, depending on the actual hardware. Even so, 5000 frames at 15 minutes each add up to roughly 1,250 hours - about 52 days - of machine time, so I want to run as many jobs in parallel as possible. On the other hand, our SLURM cluster by default does not allow more than 6 hours of run time per job, which means I cannot render more than 24 frames per submitted batch.

SLURM Shell Script

After some tries I ended up with the following script ("renderAnim.sh") to render the movie:
#!/bin/bash
# Task name:
#SBATCH -J <default job name>
#SBATCH --mail-user=<email address for notification>
#SBATCH --mail-type=FAIL
#SBATCH --time=4:00:00
#SBATCH --exclusive
# Working Directory on shared storage
#SBATCH -D <path to working directory>
# Standard and error output
###SBATCH -o %j_%N.out.log
#SBATCH -e %j_%N.err.log
# Application code
if [ $1 -eq $2 ]
then
    # Single frame requested: just render, do not submit a follow-up job
    PATH=<if you need a local blender installation put PATH here>:$PATH
    blender -b <myBlendFile>.blend -o ./output/ -F PNG -s $1 -e $2 -a
else
    if [ $2 -le 5000 ]
    then
        OLD_END=$2
        NEXT_START=$((OLD_END+1))
        NEXT_END=$((OLD_END+10))
        # Submit the job for the next chunk of frames (if any are left) before rendering this one
        if [ $NEXT_START -le 5000 ]
        then
            sbatch --job-name=${NEXT_START}_${NEXT_END} renderAnim.sh $NEXT_START $NEXT_END
        fi
        PATH=<if you need a local blender installation put PATH here>:$PATH
        blender -b <myBlendFile>.blend -o ./output/ -F PNG -s $1 -e $2 -a
    fi
fi
This script renders 10 frames per batch job and, as soon as SLURM executes it, submits the job for the next chunk of frames (up to frame 5000). By this mechanism the script keeps requesting jobs until all free machines are used. The option "#SBATCH --exclusive" requests a complete machine for each job, to account for the "egoistic" memory management of Blender.
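To start the chain, the first chunk is submitted by hand; assuming the script above is saved as renderAnim.sh in the working directory, the kickoff could look like this (the frame numbers are examples):
sbatch --job-name=1_10 renderAnim.sh 1 10
This first job renders frames 1 to 10 and, on startup, already submits the job for frames 11 to 20, and so on.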

I hope this helps with your own project - keep on geekin'!
Your WolfiG


Thursday, June 9, 2016

Error Checking out Maven Project in Eclipse Mars

Background

I am working in Eclipse (Mars.2 Release (4.5.2)) in a team environment using SVN and Maven. I am new to this environment and I am stumbling every now and then over errors.
One of the first appeared when I tried to check out one of our team's projects from the SVN server by using:

  • "File" -> "New" -> "Other..." -> "Maven" -> "Check out Maven Projects from SCN", or
  • "SVN Repository Exploring" Perspective: Right click on Project -> "Check out as Maven Project..."
In both cases I got an error:

'Checking out Maven projects' has encountered a problem.
An internal error occurred during: "Checking out Maven projects"

Error details:

An internal error occurred during: "Checking out Maven projects".
org.eclipse.team.svn.core.operation.file.CheckoutAsOperation.<init>(Ljava/io/File;Lorg/eclipse/team/svn/core/resource/IRepositoryResource;IZZ)V

Solution 

A hint from my colleague solved this (kudos!):

  • Do NOT check out as a Maven project (e.g. via the "File" dialog), BUT
  • use SVN Repository Exploring perspective and
  • then use normal "Check out" to check out your project, then
  • change to Java perspective,
  • then right click on your checked out project and
  • choose "Configure" -> "Convert to Maven Project"
Hope that helps - keep on geekin'
WolfiG

Wednesday, February 3, 2016

Creating Technical Illustrations using Blender

For my job at GSI (a nuclear research center in Germany) I am creating scientific/technical illustrations of machines and experiments. Here are some of my favorites which you can use as inspiration. Please note that the images here are copyrighted material and may not be used without explicit approval.

A look into the cave of the "Cryring" ion storage ring - the machine I am working on:

This illustration was created by combining Blender files created from converted CATIA models. These models were converted to a Blender-compatible format using the technique described in another blog post of mine. The two persons in the illustration were made with the freeware program Makehuman.

A detail of the Cryring storage ring, the injection septum:



What I like most here is the combination of "photorealistic" rendering and line art. How to create this effect is described in the post "Using Freestyle for Highlighting Objects". I used the same technique for the illustration of the ESR gas target:


A detail view of the system of electrodes of the "ion bumper" inside a vacuum chamber. This is part of the CRYRING ion storage ring. As above, I used Freestyle NPR in combination with photorealistic rendering:


Illustration of GSI and FAIR accelerators represented by glowing tubes:


Here I used the "IOL" (ion optical layout) exported from CATIA via AutoCAD dxf to create the polylines/glowing tubes. The IOL is a set of points in 3D space defining the positions of beam optical elements (magnetic dipoles, quadrupoles, higher N-poles, etc) and their connection.
Furthermore I used the SAPLINGS addon to Blender to create the trees. The replication of the initial set of trees was done exploiting the hair particle system.

A less artistic but very technical use of Blender as an illustration tool is creating illustrations from data stored in a database (related article). I used this technique to create an illustration of cable routes along the accelerator:

Wednesday, December 9, 2015

DataBase Driven Blender: Creating Objects in Blender from DB Data

Introduction

Currently I am working in a project team installing a particle accelerator. Like every other complex machinery, this involves a large amount of cabling work to connect all kinds of devices with all kinds of cables and cable types. These cable types can roughly be grouped according to their function: power cables, Ethernet cables, interlock cables, etc. Quite often these different groups of cables are laid by different teams or even different companies along the same cable paths within a short period of time. Hence, it is crucial that these teams work in a coordinated fashion and according to a transparent plan, so they do not interfere with each other during their work.
As I am a big fan of 3D graphics I wanted to create 3D visualizations of the cable paths assigned to the various cable types, and because I am a big fan of Blender, I wanted to create the required images there. The output of my efforts looks like this:


What you see here is a look into the "cave" (a room created by large concrete blocks shielding radiation) where the accelerator will be installed. The 3D model data of the concrete blocks and the cable support structures comes from CATIA data provided by our mechanical engineering departments - another article of mine describes the export from CATIA and import into Blender. The grating-like structures carried by orange supports are cable trays where the cables are to be laid; the colored tube structures are the cable paths for the different cable groups. The dimension along the horizontal axis is roughly 20 m.

Environment

Initially, the data on cable trays and routes came from 2D AutoCAD drawings provided by engineers. Based on these drawings I created an MS SQL Server database model holding data on cable routes, including the x,y,z node coordinates of the cable paths and the definition of path segments which can be chained into an end-to-end cable path from a start device to an end device.
The database model has the following structure:


This database structure can be accessed via a view combining the data:

SELECT dbo.CableRoutePathIDs.RoutePathID,
       dbo.CableRoutePaths.SegmentSortOrder,
       dbo.CableRoutePaths.RouteSegmentID,
       dbo.CableRoutePathIDs.RoutePathUser,
       dbo.CableRoutePathIDs.Description,
       dbo.CableRouteSegments.Start_NodeID,
       dbo.CableRouteNodes.GsiQuadrantX * 720 + dbo.CableRouteNodes.DistToGsiQuadrantX_cm AS StartX,
       dbo.CableRouteNodes.GsiQuadrantY * 720 + dbo.CableRouteNodes.DistToGsiQuadrantY_cm AS StartY,
       dbo.CableRouteNodes.HeightOverFloor_cm AS StartZ,
       dbo.CableRouteSegments.End_NodeID,
       CableRouteNodes_1.GsiQuadrantX * 720 + CableRouteNodes_1.DistToGsiQuadrantX_cm AS EndX,
       CableRouteNodes_1.GsiQuadrantY * 720 + CableRouteNodes_1.DistToGsiQuadrantY_cm AS EndY,
       CableRouteNodes_1.HeightOverFloor_cm AS EndZ,
       dbo.CableRouteNodes.GSICoordX_m AS StartX2,
       dbo.CableRouteNodes.GSICoordY_m AS StartY2,
       CableRouteNodes_1.GSICoordX_m AS EndX2,
       CableRouteNodes_1.GSICoordY_m AS EndY2
FROM dbo.CableRouteNodes
     INNER JOIN dbo.CableRouteSegments
         ON dbo.CableRouteNodes.NodeID = dbo.CableRouteSegments.Start_NodeID
     INNER JOIN dbo.CableRouteNodes AS CableRouteNodes_1
         ON dbo.CableRouteSegments.End_NodeID = CableRouteNodes_1.NodeID
     RIGHT OUTER JOIN dbo.CableRoutePathIDs
         INNER JOIN dbo.CableRoutePaths
             ON dbo.CableRoutePathIDs.RoutePathID = dbo.CableRoutePaths.RoutePathID
         ON dbo.CableRouteSegments.PathSegmentID = dbo.CableRoutePaths.RouteSegmentID
Last but not least, I needed a Python script that accesses the database and creates the tube-shaped objects along the defined cable paths. The view provides the following columns:

  • [0] RoutePathID: the identifier of the cable route path (1 per cable group)
  • [1] SegmentSortOrder: the cable follows the path along a certain chain of path segments
  • [2] RouteSegmentID: identifier of current path segment
  • [3] RoutePathUser: as described above, each path is assigned to a cable group which is under the supervision of a team
  • [4] Description: long text description of the path segment
  • [5] StartNodeID: Start node of path segment
  • [6,7;10,11] Start/End GSIQuadrantX/Y: the accelerator facility area is segmented into squares of 720 x 720 cm, providing a coordinate system with a specific origin (see the sketch after this list)
  • [9] EndNodeID: End node of path segment
  • [8;12] Start/End HeightOverFloor_cm: z-coordinate
  • [13-16] Start/End GSICoordX/Y_m: absolute distance to the origin in x- and y-direction (in m)
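To illustrate the quadrant scheme with a small sketch (hypothetical values; 720 cm per quadrant square as described above, divided by the scale factor of 100 to obtain Blender units in meters):
def quadrant_to_blender(quadrant, dist_cm, scale=100.0):
    # absolute position in cm relative to the facility origin, converted to Blender units (m)
    return (quadrant * 720 + dist_cm) / scale

# e.g. quadrant index 3, 250 cm into the quadrant: (3*720 + 250)/100 = 24.1
x = quadrant_to_blender(3, 250)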


Blender Script

One challenge I had to overcome during the development of the script was access to the Python module PYODBC, which provides an easy-to-use API for accessing databases, among them MS SQL Server. I documented the solution I found - which works pretty well in my case - in another blog post.
The Blender script implementation looks as follows:
import sys
pyodbcPath = 'E:\\Progs\\Anaconda\\envs\\python342\\Lib\\site-packages'
systemPaths = sys.path
checkResult = [s for s in systemPaths if pyodbcPath in s]
#check if path to pyodbc exists in sys and add if needed
if (checkResult == []):
    sys.path.append(pyodbcPath)
   
import pyodbc
import bpy
from mathutils import Vector
import bmesh
def createBevelObject(parentID): # parentID is currently unused
    # Create a NURBS circle on the last layer; it serves as the bevel profile for the path curves
    bpy.ops.curve.primitive_nurbs_circle_add(radius=0.05,layers=(False,False,False,False,False,False,False,False,False,False,False,False,False,False,False,False,False,False,False,True))
    bevelObj = bpy.context.selected_objects[0]
    return (bevelObj)

dbConnection = pyodbc.connect('DRIVER={SQL Server};SERVER=<DBSERVERNAME>;DATABASE=<DBNAME>;UID=<DBUSERNAME>;PWD=<DBPASSWORD>')
dbCursor = dbConnection.cursor()
queryStatement = "SELECT * FROM dbo.CablesPathsDefinitionsWithOwners ORDER BY RoutePathID, SegmentSortOrder"
splineCounter = 0
oldPathID = ""
vertexList = []
scaleFactor = 100.0
outText = ""
bevelObject = createBevelObject(oldPathID)
for dbRow in dbCursor.execute(queryStatement):
    currentPathID = dbRow[0]
    if (oldPathID != currentPathID):
        if (len(vertexList) != 0): #vertexList must not be empty
            # Finalize curve creation with given list of vertices
            bpy.context.scene.objects.link(objectData)
            polyline = curveData.splines.new('POLY')
            polyline.points.add(len(vertexList)-1)
            for vertexCounter in range(len(vertexList)):
                x,y,z = vertexList[vertexCounter]
                outText = outText + oldPathID + ";" + str(vertexCounter) + ";x:" + str(x) + ";y:" + str(y) + ";z:" + str(z) + ";" + "\n"
                polyline.points[vertexCounter].co = (x,y,z,1)          
        vertexList = []
        if (dbRow[6] != None and dbRow[7] != None and dbRow[8] != None and dbRow[10] != None and dbRow[11] != None and dbRow[12] != None):
            # if oldPath != newPath create new curve object and fill with vertices afterwards
            # Prepare curve creation
            curveData = bpy.data.curves.new(currentPathID, type='CURVE')
            curveData.dimensions = '3D'          
            #curveData.bevel_depth = 1 / scaleFactor
            # Create new curve
            objectData = bpy.data.objects.new(currentPathID, curveData)
            curveData.bevel_object = bevelObject
            objectData.location = (0,0,0)
            oldPathID = currentPathID
            # add coordinates to vertexList
            if (dbRow[13] != None and dbRow[14] != None):
                x = float(dbRow[13])
                y = float(dbRow[14])        
            else:  
                x = float(dbRow[6]) / scaleFactor
                y = float(dbRow[7]) / scaleFactor
            z = float(dbRow[8]) / scaleFactor
            x = round(x,2)
            y = round(y,2)
            z = round(z,2)
            vertexList.append(Vector((x,y,z)))
            if (dbRow[16] != None and dbRow[15] != None):
                x = float(dbRow[15])
                y = float(dbRow[16])              
            else:
                x = float(dbRow[10]) / scaleFactor
                y = float(dbRow[11]) / scaleFactor
            z = float(dbRow[12]) / scaleFactor
            x = round(x,2)
            y = round(y,2)
            z = round(z,2)          
            vertexList.append(Vector((x,y,z)))
    else:
        if (dbRow[6] != None and dbRow[7] != None and dbRow[8] != None and dbRow[10] != None and dbRow[11] != None and dbRow[12] != None):
            # add coordinates to vertexList - coordinates are not necessarily well aligned in polyline
            # First set of coordinates from DB
            if (dbRow[13] != None and dbRow[14] != None):
                x = float(dbRow[13])
                y = float(dbRow[14])        
            else:  
                x = float(dbRow[6]) / scaleFactor
                y = float(dbRow[7]) / scaleFactor
            z = float(dbRow[8]) / scaleFactor
            x = round(x,2)
            y = round(y,2)
            z = round(z,2)
            theVector = Vector((x,y,z))
            if (theVector not in vertexList): # check if entry already exists in list
                vertexList.append(theVector)
            # Second set of coordinates from DB
            if (dbRow[16] != None and dbRow[15] != None):
                x = float(dbRow[15])
                y = float(dbRow[16])              
            else:
                x = float(dbRow[10]) / scaleFactor
                y = float(dbRow[11]) / scaleFactor
            z = float(dbRow[12]) / scaleFactor
            x = round(x,2)
            y = round(y,2)
            z = round(z,2)
            theVector = Vector((x,y,z))
            if (theVector not in vertexList):
                vertexList.append(theVector)
# Finalize the last path: the loop above only flushes vertexList when the path ID changes,
# so without this step the final curve would never be linked into the scene
if (len(vertexList) != 0):
    bpy.context.scene.objects.link(objectData)
    polyline = curveData.splines.new('POLY')
    polyline.points.add(len(vertexList)-1)
    for vertexCounter in range(len(vertexList)):
        x,y,z = vertexList[vertexCounter]
        polyline.points[vertexCounter].co = (x,y,z,1)
dbConnection.close()
This is more or less it. Despite the fact that the xyz coordinate data needs to be very correct and aligned with the other 3D model data coming from CATIA (following the s**t-in s**t-out paradigm), the script works pretty well and creates the desired bundles of poly curves in Blender, beveled with the circle profile object.

I hope this post helps you with your own project - keep on geekin'!
Your WolfiG

Thursday, November 26, 2015

Calling 3rd Party Python Modules in Blender Scripts

Introduction

This cost me some nerves....
Yesterday I had the idea of connecting Blender 2.76b via a Blender plugin - which still needs to be written... - to a SQL Server database installation where I keep geometric data of some objects. Furthermore, I am working in a Windows environment using an Anaconda Python installation, which I normally run with Python 2.7.6.
The Python weapon of choice to connect to a database is the module PYODBC, which offers convenient functions for accessing databases. Unfortunately, this module is not available in Blender's bundled Python installation, and I did not manage to install pyodbc within the context of Blender's bundled Python.

So this is where my odyssey started. I had the following challenges to solve:
  • Create a Python environment compatible with Blender 2.76b
  • Point Blender's Python environment to the environment with pyodbc

Creating an Independent Python Environment

As I said, I am using the Anaconda IDE. My first approach was to install plain Python 3.4.2 (required for Blender 2.76) from python.org in parallel to Anaconda. The installation went fine, but I quickly ran into an issue: I wanted to install the pyodbc package on top of the standard installation, which was not possible because it requires a working VS2013 (?) environment to compile the sources. This is where I went back to Anaconda.
In Anaconda it is possible to create version-dependent Python environments in parallel to the initial installation using the command
  • conda create -n <environment name> python=<version number>
The new environment can be found at the path:
  • ..\<Anaconda root path>\envs\<environment name>

To activate the environment call in the command line:
  • activate <environment name>
So whatever you have to change to adjust your Python installation for Blender needs to be done in the environment with the matching version number for Blender. The following steps assume that you have activated an environment with this right version.
In Anaconda modules like pyodbc can be installed very easily without the need for external tools and compilers. This is done by calling (in this example I am installing pyodbc):
  • conda install pyodbc
The conda command fetches all the required sources from the internet, compiles them if needed and adjusts the environment so they can be used.
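Putting the steps together for the Blender 2.76 case, a complete session could look like this (the environment name "blenderpy" is just an example):
  • conda create -n blenderpy python=3.4
  • activate blenderpy
  • conda install pyodbc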
3rd party modules (like pyodbc) are stored at a path
  • ..\<Anaconda root path>\envs\<environment name>\Lib\site-packages
This is where Blender needs to look for the 3rd party module.

Telling Blender where to Find Things

I tried a couple of things to tell Blender where to look for pyodbc, including setting the PYTHONPATH variable following this blog post, deleting Blender's Python, etc. However, all of these approaches were very cumbersome and led to negative interference between Blender and Anaconda - meaning one or the other did not work.
Finally I found a solution which seems quite elegant to me: I don't touch any system-wide settings like PYTHONPATH and do everything needed in my script.
I simply check whether the path to pyodbc (or any other 3rd party module) is already included in Python's sys.path variable. If it is not, I append the required path and import the module afterwards:

import sys

pyodbcPath = '..\\<Anaconda root path>\\envs\\<environment name>\\Lib\\site-packages'
systemPaths = sys.path

checkResult = [s for s in systemPaths if pyodbcPath in s]

#check if path to pyodbc exists in sys and add if needed
if (checkResult == []):
    sys.path.append(pyodbcPath)
   
import pyodbc
import bpy

[...] Your code comes here

This is it.

Thanks for reading my post, I hope you enjoyed it and it helps you with your own project.
Keep on geekin'! Your
WolfiG


Wednesday, November 25, 2015

Building Data Driven Applications: Combining Vis.js with Paper.js

Introduction

Currently I am working on a visualization project where I want to display real world entities with the help of a network graph. In addition to the classical network graph I want to display extra information like groups of network nodes, or highlight groups of nodes based on node attributes. Here is an example of such combined information:
The network graph displays the connections of "things"/entities and the colored areas show groups of these entities.
All the data I am using is stored in a database, so I was looking for a way to visualize these data with the least effort and the fewest media breaks. Furthermore, one factor in the choice of tools was the possibility to deploy the visualizations with the least possible effort, in the best case as a view on "real time" data in the database.

Frameworks for Network Graphs

To achieve my task I evaluated a couple of tools and frameworks. My first choice was GEPHI, which is available in version 0.8.2. GEPHI provides means to lay out quite large numbers of nodes and edges into nicely looking network graphs, and it is quite popular among scientists and practitioners working with networks. However, I found a couple of drawbacks which made me leave the GEPHI path: first of all, GEPHI is a closed, standalone program. Hence it is hard to enhance a finished graph layout with additional information; this requires external tools. Furthermore it is not possible to provide interactive, real-time graphs which can be presented to an end user. The GEPHI workflow requires the following steps: draw data from the database (GEPHI has a direct DB interface), lay out the network with some algorithm, export the visualization to a format which allows post-processing - e.g. SVG - and do the actual post-processing.
Finally I decided to abandon the GEPHI path, as it involved too many steps with human intervention, and turned to web-based tools. Quite quickly I settled on JavaScript frameworks as my weapon of choice and evaluated the following frameworks for network layouting: D3.js, cytoscape.js, sigma.js and vis.js. To keep things short, I ended up with Vis.js, mostly because it was the only framework which allowed me to pin down nodes via a configuration file (a JSON I export from the database) and because it provides a "force atlas 2"-like layout algorithm which yields the network layouts I was aiming for.

Frameworks for HTML5 Graphics

For the "additional" graphics frameworks I ended up quickly with the "big three": Paper.js, Processing.js and Raphael.js. I started with Raphael.js which I knew from a project I was involved before, but it quickly turned out that Raphael's SVG cannot or is hard to combine with vis.js as the latter one uses the HTML5 canvas. I didn't get warm with Processing.js as it uses its own scripting language, so finally I ended up with Paper.js, which for my taste yields satisfactory results.

Vis.js' Door to other Frameworks

The first steps of combining vis.js and paper.js were not easy. I managed to overcome the first obstacles with the help of the vis.js team (thanks for that). Based on one of the vis.js network examples I was able to hook Paper.js into Vis.js.

As the first step you have to tell Paper.js to paint on the Vis.js canvas. This can be done in the main program by the following lines:
[...] // Things required to setup network, etc.
// "mynetwork" is defined in html <body>
var container = document.getElementById('mynetwork');
var network = new vis.Network(container, data, options);
[...] // More network stuff here if needed
paperCanvas = container.firstChild.canvas;
paper.setup(paperCanvas);

The main entry points to inject other graphics into the Vis.js canvas are the Vis.js network events
  • network.on("beforeDrawing", function(ctx) { [...] }) - draws graphics behind the network graph
  • network.on("afterDrawing", function(ctx) { [...] }) - draws in front of the network graph
With these hook events one can start to draw Paper.js elements and display them together with the network graph; any Paper.js related code can replace the square brackets. I use Vis.js Network/DataSet methods to extract geometric data on positions etc. and pass them to Paper.js to draw elements related to the network nodes, as in the sketch below.
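As a minimal sketch of this mechanism (node IDs, radius and color are made up, and clean-up of old Paper.js items between redraws is omitted):
network.on("afterDrawing", function(ctx) {
    // getPositions() returns canvas-space coordinates for the given node IDs
    var positions = network.getPositions([1, 2, 3]);
    for (var nodeId in positions) {
        // convert to DOM pixels, which is the space Paper.js draws in
        var domPos = network.canvasToDOM(positions[nodeId]);
        var marker = new paper.Path.Circle(new paper.Point(domPos.x, domPos.y), 20);
        marker.strokeColor = 'red';
    }
    paper.view.draw();
});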

I hope this post helps you with your own project. Keep on geekin', your
WolfiG

Tuesday, October 13, 2015

How to Create and Deploy Custom Ribbon Entries in MS Office (including custom icons) and avoid some Pitfalls

Background

For a project in our company I wanted to create an Excel-macro-based user interface for a SQL Server database. The VBA macro programming went quite well (due to some experience I had already collected in this field), but I also wanted to provide two buttons in the Excel 2010 ribbon to give the user the possibility to trigger macros to read/write from/to the database.
Many hints are available on the web, but I did not come across a recipe which provides the steps to create custom UIs in a concise form. Thus, and as I fell into some traps, I want to share my experiences in this post.

Challenge

As described above, I wanted to provide two end-user buttons in the Excel ribbon as custom UI elements. To achieve this goal I had two problems to solve:
  • How to get the buttons into the ribbon?
  • How to deploy these custom buttons with my Excel in the most user-friendly way?

Custom Elements via Excel Options

Excel itself provides the means to create custom ribbon elements. There are a number of articles out there describing how to do this - for instance "How to Add Your Own Macros to Excel 2010" on changdoo.org. The process is quite simple and easily carried out.
To deploy the customized ribbon to your customers, Excel offers the possibility to export the changed settings to an xml-file (called *.exportedUI) via the menu:
File --> Options --> "Customize Ribbon" menu entry --> "Import/Export" button:

As an external reference, see as well THIS Microsoft article.

I do not have the administrative means to distribute these buttons centrally, but the configuration file created this way can be sent to those users who want to incorporate the new buttons into their work environment.
Nevertheless this process has some shortcomings:
  • The users have to follow a procedure to import the file with my buttons into their Excel environment
  • One has to distribute a separate file which has to be versioned separately
  • It is not possible to use custom icons for the buttons
And last but not least, the export procedure creates XML code with some traps, which become obvious if one inspects the *.exportedUI XML with an editor. The most critical one to me is that the macro called via "onAction" is referenced by the absolute path of the Excel file the export was made from. Here is an example of a custom UI created via Excel Options:

Freshly exported to *.exportedUI, the code comes without formatting (no line breaks); most importantly, note the absolute path in the XML:

<mso:cmd app="Excel" dt="1" />
<mso:customUI xmlns:x2="http://schemas.microsoft.com/office/2009/07/customui/macro" xmlns:x1="TFCOfficeShim.Connect.14" xmlns:mso="http://schemas.microsoft.com/office/2009/07/customui">
<mso:ribbon>
<mso:qat>
<mso:sharedControls>
<mso:control idQ="mso:FileNewDefault" visible="false"/>
<mso:control idQ="mso:FileSave" visible="true" insertBeforeQ="mso:FileOpen"/>
<mso:control idQ="mso:FileSendAsAttachment" visible="false" insertBeforeQ="mso:FileOpen"/>
<mso:control idQ="mso:Spelling" visible="false" insertBeforeQ="mso:FileOpen"/>
<mso:control idQ="mso:Undo" visible="true" insertBeforeQ="mso:FileOpen"/>
<mso:control idQ="mso:Redo" visible="true" insertBeforeQ="mso:FileOpen"/>
<mso:control idQ="mso:SortAscendingExcel" visible="false" insertBeforeQ="mso:FileOpen"/>
<mso:control idQ="mso:SortDescendingExcel" visible="false" insertBeforeQ="mso:FileOpen"/>
<mso:control idQ="mso:FileOpenRecentFile" visible="false" insertBeforeQ="mso:FileOpen"/>
<mso:control idQ="mso:FileNew" visible="true" insertBeforeQ="mso:FileOpen"/>
<mso:control idQ="mso:FileOpen" visible="true"/>
<mso:control idQ="mso:PrintPreviewAndPrint" visible="true" insertBeforeQ="mso:FilePrintQuick"/>
<mso:control idQ="mso:FileSaveAsPdfOrXps" visible="true" insertBeforeQ="mso:FilePrintQuick"/>
<mso:control idQ="mso:FilePrintQuick" visible="true"/>
<mso:control idQ="mso:FileClose" visible="true"/>
</mso:sharedControls>
</mso:qat>
<mso:tabs>
<mso:tab id="mso_c4.1568D5E4" label="My New Tab" insertAfterQ="x1:IDC_TEAM_TAB">
<mso:group id="mso_c5.1568D5E4" label="My New Group" autoScale="true">
<mso:button idQ="myButtonID" label="Call My Macro" imageMso="ListMacros" onAction="Q:\FolderName\MyMacroExcel.xlsm!MyMacro" visible="true"/>
</mso:group>
</mso:tab>
</mso:tabs>

</mso:ribbon>
</mso:customUI>
PITFALL: If this is deployed to other users, the referenced macro will not be found if the Excel file is stored at a different location. To get around this trap you have to modify the code in *.exportedUI manually and remove the absolute path to the macro.
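After this manual fix, the button element from the example above would look like this (path stripped, so only workbook and macro name remain):

<mso:button idQ="myButtonID" label="Call My Macro" imageMso="ListMacros" onAction="MyMacroExcel.xlsm!MyMacro" visible="true"/>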

Custom Elements via Direct Ribbon Customization

With the above said, I was not very content with the "Excel Options" route. After some searching I came across the "Custom UI Editor", an external tool available for example HERE.
With this little program one can modify the UI resources of an MS Office file directly without the need to export the changes to some external file. The UI is quite simple and offers the possibility to upload custom icons as images (any standard image format) and enter custom ribbon code directly in the Excel file:

The resulting ribbon customization is stored inside the Excel file and is hence deployed directly with the file. The ribbon entry created this way is not visible in the "Options" dialog mentioned above.

The following code worked in MS Word 2010 and Excel 2010:

<customUI xmlns="http://schemas.microsoft.com/office/2009/07/customui">
<ribbon>
<tabs>
<tab id="myTab" label="MYLABEL">
<group id="testCaseTool" label="My Text">
<button id="ID1" label="Text 1" size="large" image="iconBitmap1" onAction="myMacro1" visible="true" />
<button id="ID2" label="Text 2" size="large" image="iconBitmap2" onAction="myMacro2" visible="true" />
</group>
</tab>
</tabs>
</ribbon>
</customUI>
It will create a new entry in the ribbon with two buttons calling two different macros. Add your icons and you're done!

PITFALL: Very critical is the fact that the macros referenced in the XML must have a special input parameter, as described in the forum post "VBA error 'Wrong number of arguments or invalid property assignments' when running macro via custom button".
If one references routines declared as VBA "Sub subname()", one runs into the error "Wrong number of arguments...". Hence you have to modify your VBA code accordingly and create a new VBA sub with the right input parameter: "Sub newsubname(control As IRibbonControl)".
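A ribbon-compatible macro therefore looks like this (the name is a placeholder):

' The control parameter is mandatory for macros called from ribbon buttons
Sub MyRibbonMacro(control As IRibbonControl)
    ' your original macro code goes here
End Sub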

This is how I was able to produce a satisfactory custom ribbon entry which I could deploy to my colleagues without any further hassle.

I hope you can get something out of this post for your own project - keep on geekin'

WolfiG