Friday, May 30, 2008

Installing boost::python on Leopard

This is a memo from when I installed boost::python on my iMac. For some reason I couldn't make the boost::python regression tests pass, but boost itself seems to be installed fine.

1) Edit boost_1_35_0/tools/build/v2/tools/darwin.jam: insert a semicolon between
-Wl,-dead_strip and -no_dead_strip_inits_and_terms

2) Make a dummy user-config.jam in my home dir (or root), containing:

using darwin
;

3) Build boost with
bjam -toolset=darwin python

4) Make a symbolic link
ln -s boost-1_35/boost .

5) Get an example source file from http://www.kmonos.net/alang/boost/classes/python.html

6) Build the extension with
gcc my_sample.cpp -L/usr/local/lib -dylib /usr/local/lib/libboost_python-mt-1_35.dylib -I/usr/local/include -framework Python -I/Developer/SDKs/MacOSX10.5.sdk/System/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/ -lstdc++ -dynamiclib -o my_sample.so

I don't know if 1) is necessary; doing so just seemed natural since they are both linker options. Without 2), bjam tried to execute gcc with weird options when linking libboost_python-mt-1_35.dylib. I still don't know whether I can use bjam to build an extension.
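
For reference, a minimal module along the lines of steps 5) and 6) might look like this (a sketch; the Greeter class is made up, and only the BOOST_PYTHON_MODULE name has to match the base name of the .so built in step 6):

#include <boost/python.hpp>
#include <string>

// A made-up class to expose to Python.
class Greeter
{
public:
    explicit Greeter(const std::string& name) : name_(name) {}
    std::string greet() const { return "Hello, " + name_; }
private:
    std::string name_;
};

// The module name must match the .so base name (my_sample.so -> my_sample).
BOOST_PYTHON_MODULE(my_sample)
{
    using namespace boost::python;
    class_<Greeter>("Greeter", init<std::string>())
        .def("greet", &Greeter::greet);
}

After building, "import my_sample" in Python and my_sample.Greeter("world").greet() should return "Hello, world".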

Thursday, May 29, 2008

Subclassing or parenting? (Shake)

When you reuse a class to make a new class in C++, you have two options: making a subclass, or holding an instance of the class inside your new class. The latter is generally considered the better choice, unless there's a conceptual "is-a" relationship.

I realized I have the same two options when making a new Shake node from an existing one: subclassing, or having an instance of the existing node as a child of my node. And this time I need to choose between them in a more practical way.

When I subclass a node, everything (UI, serialization, ...) is OK, but I need to make a creator function from scratch (see my previous post). I need to be careful not to miss anything, e.g. notify(), and it's really time-consuming because I need to observe the existing node closely.

When I use parenting, making a creator function will probably be easier if the number of arguments is fixed, but I need to connect plugs, implement some code so that the on-screen controls work with the new node, etc. And I see no way of calling a creator function from inside my creator function if it accepts a variable number of arguments. Maybe a C language limitation? Now I'm stuck here...

I wish I had Shake source code, then subclassing would be really easy.


Update (Jun 11):
In the end I concluded I could do neither subclassing nor parenting. For a simple node like Move2D, I can guess exactly what the creator function does and recreate it. But what I tried to reuse was a much more complicated node (namely MultiPlane). I can observe its behavior closely and imitate it, but how can I be sure that my creator function does exactly the same thing as the standard one? So subclassing is out. The remaining option is parenting, but I cannot use that either, because parenting means creating a MultiPlane object somewhere inside the creator function of my custom node. The MultiPlane creator function takes a variable number of arguments, so my creator function would also need a variable number of arguments just to pass them on to the internal MultiPlane creator. How can I do that? stdio's printf() has a sibling, vfprintf(), that takes a va_list, but the MultiPlane creator function has no such variant. Using assembler here is not a good idea.

Actually I came up with another idea: first create a standard MultiPlane, then customize its node structure afterwards. But a MultiPlane is a MultiPlane, and when the script is saved and loaded, MultiPlane's creator function gets called. There's no place to hold the additional parameters, sigh.
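
For reference, the printf() analogy in code (a generic C/C++ sketch with a hypothetical logf(), nothing Shake-specific): forwarding "..." arguments only works when a va_list-taking sibling exists.

#include <cstdarg>
#include <cstdio>

// Forwarding variable arguments is only possible because vprintf() exists
// as a va_list-taking sibling of printf().
void logf(const char* fmt, ...)
{
    va_list args;
    va_start(args, fmt);
    std::vprintf(fmt, args);  // forward the whole argument list
    va_end(args);
}
// If the callee (like the MultiPlane creator) exists only in the "..." form,
// there is no portable way to pass your own "..." arguments through to it.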

Making a patch for boost.python (accepted)

I made a patch for boost::python and sent it to the boost developers.
Currently, with boost 1.35.0, you write code like this to access an attribute of an object (call it A):


object attrobject = obj.attr("position");

With my patch, you will also be able to write code like this (call it B):

object attrnameobject("position");
object attrobject = obj.attr(attrnameobject);

Internally, A does roughly three things:
A-1) Create a Python string object containing "position".
A-2) Access the object's attribute using that string object.
A-3) Delete the string object.
So if you execute A many times, you create and delete a string object each time. If there are lots of objects and you need to access the "position" attribute of all of them, it's a waste of time to create and delete the string object "position" every time.
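
For reference, this is roughly what A boils down to at the Python/C API level (a sketch of the idea only, not boost::python's actual implementation; get_position() is a made-up helper and error handling is omitted):

#include <Python.h>

PyObject* get_position(PyObject* obj)
{
    PyObject* name = PyString_FromString("position"); // A-1: create the string object
    PyObject* attr = PyObject_GetAttr(obj, name);      // A-2: access the attribute through it
    Py_DECREF(name);                                   // A-3: delete the string object
    return attr;                                       // new reference, or NULL on error
}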

If you use B, you create the string object once with
object attrnameobject("position");
and you can reuse it to access the "position" attribute of every object:

for(everyobject)
{
object attrobject = obj.attr(attrnameobject);
}

and it's more efficient.
According to my test, it could cut about 30% off the time to set a value and about 45% off the time to get a value.
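
As a more concrete sketch of the reuse pattern (read_positions() and its std::vector argument are made up for illustration):

#include <boost/python.hpp>
#include <vector>

using namespace boost::python;

// Read the "position" attribute of many objects, creating the attribute-name
// string object only once.
void read_positions(std::vector<object>& objects)
{
    object attrnameobject("position");                        // created once
    for (std::size_t i = 0; i < objects.size(); ++i)
    {
        object attrobject = objects[i].attr(attrnameobject);  // reused every iteration
        // ... use attrobject ...
    }
}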


It's been accepted and committed to the boost svn trunk (revision 45918), so you will be able to use this feature in the next boost release (1.36.0).

Wednesday, May 28, 2008

A tool to analyze Shake tree evaluation

I made a dummy node that logs notify and eval calls on its plugs.
Every time notify() or eval() is called, the plug's full path name and new value are logged to the console.
When you connect the node like this,

[screenshot: a sample node tree with the logNode inserted]

something like this is displayed on the Shake console:

[screenshot: Shake console output from the logNode]

It should be useful for analyzing a tree. Here is the source code of the node:

#include <nriiplug.h>
#include <nrifx.h>

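// logNode: a pass-through node that logs every notify() and eval() call on its plugs.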
class NRiFx_Linkage logNode : public NRiMonadic {
public:
logNode();
virtual ~logNode(){}
virtual int eval(NRiPlug *p);
virtual int notify(NRiPlug *p);
NRiDeclareNodeName(logNode);
protected:
void passData_(NRiPlug *from, NRiPlug* to);
void log_(NRiPlug* p, NRiName msg);
};

const NRiName logNode::thisClassName = "logNode";

logNode::logNode() : NRiMonadic()
{
in->setNotify(1, 1);
out->setNotify(1, 1);

in->time()->addDependency(out->time());
in->enable()->addDependency(out->enable());
in->roi()->addDependency(out->roi());
in->mask()->addDependency(out->mask());
in->iBuf()->addDependency(out->iBuf());
in->cacheLevel()->addDependency(out->cacheLevel());

out->width()->addDependency(in->width());
out->height()->addDependency(in->height());
out->bytes()->addDependency(in->bytes());
out->active()->addDependency(in->active());
out->oBuf()->addDependency(in->oBuf());
out->dod()->addDependency(in->dod());
out->bPixel()->addDependency(in->bPixel());
out->cacheId()->addDependency(in->cacheId());
out->bData()->addDependency(in->bData());
out->timeRange()->addDependency(in->timeRange());
}

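// eval(): copy the value from the matching plug on the opposite side (in <-> out), log it, then defer to the base class.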
int logNode::eval(NRiPlug *p)
{
NRiPlug* parent = p->getParent();
NRiPlug* otherParent = (parent == in)? out : in;
NRiPlug* otherPlug = otherParent->getChild(p->getName());
passData_(otherPlug, p);
log_(p, "eval");
return NRiMonadic::eval(p);
}

int logNode::notify(NRiPlug *p)
{
log_(p, "ntfy");
return NRiMonadic::notify(p);
}

void logNode::log_(NRiPlug* p, NRiName msg)
{
NRiName buf;
NRiSys::error((msg + ": " + p->getFullPathName() + " value=").getString());
switch(p->getType())
{
case kString:
NRiSys::error(p->asString().getString());
break;
case kInt:
NRiSys::error(NRiName(p->asInt()).getString());
break;
case kFloat:
NRiSys::error(NRiName(p->asFloat()).getString());
break;
case kDouble:
NRiSys::error(NRiName(p->asDouble()).getString());
break;
case kPtr:
buf.sprintf("0x%x", p->asPtr());
NRiSys::error(buf.getString());
break;
default:
break;
}
NRiSys::error("\n");
}

void logNode::passData_(NRiPlug *from, NRiPlug* to)
{

switch(from->getType())
{
case kString:
to->set(from->asString());
break;
case kInt:
to->set(from->asInt());
break;
case kFloat:
to->set(from->asFloat());
break;
case kDouble:
to->set(from->asDouble());
break;
case kPtr:
to->set(from->asPtr());
break;
default:
break;
}
}

extern "C"
{
NRiExport NRiIPlug *LogNode_(NRiIPlug *img)
{
logNode* fx = new logNode;
fx->in->connect(img);
fx->setParent(NRiNode::getRoot());
return fx->out;
}
}

Saturday, May 24, 2008

Yet another texPPattrMapperNode example(s)

You can control particle movement with mesh face normals (you can also use vertex normals).
The velocity direction of a particle passing through a mesh is exactly the same as the mesh's face normal at that point. If a particle is in between two meshes, the velocity direction is calculated by interpolating the two face normals.
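
If I were to write that blend myself, it would be roughly this (a sketch of the idea with a hypothetical blendNormals() helper, not the plug-in's actual code): weight each face normal by the particle's distance to the other hit point, then renormalize.

#include <maya/MVector.h>

// Blend the forward and backward face normals by the particle's distances
// to the two hit points, then renormalize to get a velocity direction.
MVector blendNormals(const MVector& normalF, const MVector& normalB,
                     double distF, double distB)
{
    double total = distF + distB;
    if (total <= 0.0)
        return normalF;
    double wF = distB / total;  // closer to the forward hit -> more of normalF
    double wB = distF / total;
    return (normalF * wF + normalB * wB).normal();  // unit length
}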



Thursday, May 22, 2008

Shake SDK DVD review

I have watched just the first volume (and a bit of the others). I thought I would not write a review until I had watched all the DVDs, but I'll write it now while my memory is still fresh.

First of all, it was quite interesting. I compared it with the Maya API and there are lots of similarities (I heard the people who made Maya also made Shake); when I was watching the part on lazy evaluation I felt as if I were watching a Maya DVD. There are lots of differences too, of course, and the biggest is probably that data transmission over a connection is bidirectional. When there is a connection from node A to node B, in Maya data always flows from A to B. In Shake it can flow from A to B, from B to A, or both ways. This is necessary to save calculation time, but it also makes writing a node more complicated.

The concept of Shake script is also interesting. I won't write the details because the advantage is quite obvious. It must have been difficult (but fun) work to represent a node tree in the form of the C language. I just wonder: if it were real C (gcc or such), I could link it to other C/C++ programs directly. I mentioned the idea to a guy at Apple; he understood me, but pointed out it would make the script too difficult. One more small thing: though it's a C-like language, it's not C, and I would have been happier if the extension were not .h. I think I'll get confused, while making plug-ins, about whether some file is a C header or a script.

A node having only one output comes from the tree representation in a script, but I like the restriction. If there could be multiple outputs, the tree would be too difficult for the user to handle. That's Maya's way, and I understand its strength, but the Maya DG is too difficult to analyze. Shake node networks (and Houdini's) are more fun to play with. Bidirectional connection is a nice idea! I used to believe that in a visual language what you see must be how it behaves, but I will have to change my mind.

Ah, and about the image type: I wish it had the ability to carry blind data, like Houdini's.


By the way, there was one thing that took me a very long time to understand.


nuiToolBoxItem("@DisplayName1", OneInputPlugin(0));

It really confused me: why do I need to create a node and pass the resulting image as the second parameter? I considered every possibility, such as instantiating a node by cloning, until I realized the simple fact that in Shake script you don't have to enclose a string constant in quotes ("").

Creating a custom shake node deriving from an existing one

I just found that you can easily make a custom Shake node by deriving from an existing Shake node.


#include <NRiIPlug.h>
#include <NRiMove2D.h>

class NRiFx_Linkage TmpTest : public NRiMove2D {
public:
virtual int eval(NRiPlug *p)
{
NRiSys::error("TmpTest eval() called.\n"); //Just prints out something in the console.
return NRiMove2D::eval(p);
}
virtual ~TmpTest(){}
NRiDeclareNodeName(TmpTest);
};

const NRiName TmpTest::thisClassName = "TmpTest";

You may also want to reuse Move2D's creator function, but you can't, because a creator function usually creates the node and sets default parameter values at the same time. Instead you'll need to set the parameter values directly:

extern "C"
{
NRiExport NRiIPlug *TmpTest_(
NRiIPlug *img,
const char *xPan,
const char *yPan,
const char *angle,
const char *aspectRatio,
const char *xScale,
const char *yScale,
const char *xShear,
const char *yShear,
const char *xCenter,
const char *yCenter,
const char *xFilter,
const char *yFilter,
const char *transformOrder,
const char *invertTransform,
const char *motionBlur,
const char *shutterTiming,
const char *shutterOffset,
const char *useReference,
const char *referenceFrame)
{
TmpTest * fx = new TmpTest;
fx->setParent(NRiNode::getRoot());
fx->in->connect(img);
fx->pXPan()->set(xPan);
fx->pYPan()->set(yPan);
fx->pAngle()->set(angle);
fx->pAspect()->set(aspectRatio);
fx->pXScale()->set(xScale);
fx->pYScale()->set(yScale);
fx->pXShear()->set(xShear);
fx->pYShear()->set(yShear);
fx->pXCenter()->set(xCenter);
fx->pYCenter()->set(yCenter);
fx->pXFilter()->set(xFilter);
fx->pYFilter()->set(yFilter);
fx->pTOrder()->set(transformOrder);
fx->pTReverse()->set(invertTransform);
fx->pMotionBlur()->set(motionBlur);
fx->pShutterTiming()->set(shutterTiming);
fx->pShutterOffset()->set(shutterOffset);
fx->pUseReference()->set(useReference);
fx->pReferenceFrame()->set(referenceFrame);
return fx->out;
}
}

And register it with the Shake compiler (part of the string to be passed to the cmplr):

"extern image TmpTest_(image,\n"
" float xPan = 0 , float yPan = 0,\n"
" float angle = 0, float aspectRatio = GetDefaultAspect(),\n"
" float xScale = 1, float yScale = xScale,\n"
" float xShear = 0, float yShear = 0,\n"
" float xCenter = width/2, float yCenter = height/2,\n"
" const char *xFilter = \"default\", const char *yFilter = xFilter,\n"
" const char *transformOrder = \"trsx\",\n"
" int invertTransform = 0,\n"
" float motionBlur = 0, float shutterTiming = 0.5, float shutterOffset = 0,\n"
" int useReference = 0, float referenceFrame = time\n"
" );\n"

Shake uses C++ in a straightforward way, which is what makes this flexibility possible.
I'm quite impressed. Cool, cool.

Tuesday, May 20, 2008

Automatic array data conversion

You can convert between array types like this
(a and b can be any of MVectorArray, MFloatVectorArray, MPointArray, MFloatPointArray):


MFloatVectorArray a;
//Fill a with values here.
MPointArray b = DataTypeConverter::GenericValueArray(a);

with this header:

#ifndef DataTypeConverter_H
#define DataTypeConverter_H

#include <maya/MVector.h>
#include <maya/MFloatVector.h>
#include <maya/MPoint.h>
#include <maya/MFloatPoint.h>
#include <maya/MVectorArray.h>
#include <maya/MFloatVectorArray.h>
#include <maya/MPointArray.h>
#include <maya/MFloatPointArray.h>
#include <vector>

namespace DataTypeConverter
{

class GenericValue
{
protected:
double x_;
double y_;
double z_;
double w_;

public:
GenericValue(const MPoint& source) : x_(source.x), y_(source.y), z_(source.z), w_(source.w){}
GenericValue(const MFloatPoint& source) : x_(source.x), y_(source.y), z_(source.z), w_(source.w){}
GenericValue(const MVector& source) : x_(source.x), y_(source.y), z_(source.z), w_(1){}
GenericValue(const MFloatVector& source) : x_(source.x), y_(source.y), z_(source.z), w_(1){}
GenericValue(const float source) : x_(source), y_(0), z_(0), w_(0){}
GenericValue(const double source) : x_(source), y_(0), z_(0), w_(0){}

operator MPoint(){return MPoint(x_, y_, z_, w_);}
operator MFloatPoint(){return MFloatPoint((float)x_, (float)y_, (float)z_, (float)w_);}
operator MVector(){return MVector(x_, y_, z_);}
operator MFloatVector(){return MFloatVector((float)x_, (float)y_, (float)z_);}
operator float(){return (float)x_;}
operator double(){return x_;}
};


class GenericValueArray
{
protected:
std::vector < GenericValue > garray_;

public:
template < typename arrayT >
GenericValueArray(arrayT& sourcearray);

template < typename arrayT >
operator arrayT();
};


//-----------------------------------------------------------------------------
//-----------------------------------------------------------------------------
template < typename arrayT >
GenericValueArray::GenericValueArray(arrayT& sourcearray)
{
unsigned length = sourcearray.length();
for (unsigned i = 0; i < length; ++i)
{
garray_.push_back(sourcearray[i]);
}
}


//-----------------------------------------------------------------------------
//-----------------------------------------------------------------------------
template < typename arrayT >
GenericValueArray::operator arrayT()
{
arrayT destarray;
unsigned length = garray_.size();
destarray.setLength(length);
for (unsigned i = 0; i < length; ++i)
{
destarray[i] = garray_[i];
}
return destarray;
}

} // namespace DataTypeConverter
#endif
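
For example, conversion in the other direction works the same way (a sketch; it assumes the header above is saved as DataTypeConverter.h, and toFloatVectors() is a made-up helper):

#include <maya/MPointArray.h>
#include <maya/MFloatVectorArray.h>
#include "DataTypeConverter.h"

MFloatVectorArray toFloatVectors(MPointArray& points)
{
    // Doubles are narrowed to floats and the w component is dropped here.
    return DataTypeConverter::GenericValueArray(points);
}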

Monday, May 12, 2008

texPPattrMapperNode

I made a Maya utility plug-in that maps mesh normals and texture colors onto particles.



This is a node plug-in that takes particle fieldData, ppFieldData,
texture(s), and mesh(es). It then calculates particle velocities,
finds the closest intersection point on a mesh, and outputs the face/vertex normal, texture color, position, etc. at that point.

It can search in two directions (forward/backward along the particle's moving direction) and can linearly interpolate the output values depending on the distances from the particle to the hit points on the meshes.

In the above movie, the planes are meshes and the yellow lines are vertex normals. I mapped the texture color to the particle color, and the mesh vertex normal to the particle normal.

Another example.


These are the attribute descriptions.

Long name (short name) | Type | Default | Description
getDirection (gd) | enum | "fixed direction" | Valid values are "fixed direction", "velocity", "cameraCenter". If "fixed direction", the "direction" attribute determines the forward direction. If "velocity", the particle velocity is regarded as the forward direction. If "cameraCenter", the forward direction is radial from the position given by the cameraMatrix attribute.
cameraMatrix | matrix |  | Center point used to calculate each particle's forward direction. It is designed to be connected to a camera transform's worldMatrix, but any matrix attribute can be connected. This value is ignored when getDirection is not "cameraCenter".
direction (dir) | double3 | (1.0, 0.0, 0.0) | Fixed direction value. Used only if getDirection is "fixed direction". Per-particle field attribute sensitive.
asearchRadiusF | float | 1.0E8 | Forward search radius to detect intersections. Any hits beyond this distance are not considered.
asearchRadiusB | float | 1.0E8 | Backward search radius to detect intersections. Any hits beyond this distance are not considered.
searchCol (sc) | enum | "both" | Direction of the mesh hit test for color. Valid values are "both", "forward", "backward". If "both", the forward and backward values are interpolated.
searchNorm (sn) | enum | "both" | Direction of the mesh hit test for normal. Valid values are "both", "forward", "backward". If "both", the forward and backward values are interpolated.
weight (w) | double | 1.0 | Weight value multiplied into the output colors. Output normals are not affected. Per-particle field attribute sensitive.
offset (o) | double | 0.0 | Offset value applied to the output colors. Output normals are not affected. Per-particle field attribute sensitive.
minValue (min) | double | 0.0 | Minimum value applied to the output colors. Output normals are not affected. Per-particle field attribute sensitive.
maxValue (max) | double | 1.0 | Maximum value applied to the output colors. Output normals are not affected. Per-particle field attribute sensitive.
inMesh (im) | mesh (array) | N/A | Input attribute for mesh geometry. Connect mesh.worldMesh here.
inColor (int) | float3 (array) | N/A | Input attribute for texture. Connect texture.outColor here. It must correspond to inMesh, i.e. the number of elements must be the same as inMesh.
defaultColor (dc) | float3 | (0.0, 0.0, 0.0) | Default color value. Used when no hit is found. Per-particle field attribute sensitive.
defaultNormal (dn) | double3 | (0.0, 1.0, 0.0) | Default normal value. Used when no hit is found. Per-particle field attribute sensitive.
normalType (nt) | enum | "vertex normal" | The way to calculate the normal. Valid values are "vertex normal", "face normal", "average face normal".
inputData (ind) | compound | N/A | Input attribute for the particle. Connect particle.fieldData here.
inputPPData (ppda) | genericArray | N/A | Input attribute for the particle. Connect particle.ppFieldData here.
hitPointF (hpf) | vectorArray | N/A | Forward hit point on the mesh.
hitPointB (hpb) | vectorArray | N/A | Backward hit point on the mesh.
distanceToMeshF (dtmf) | doubleArray | N/A | Distance between the particle position and the forward hit point, or a negative value if no hit is found.
distanceToMeshB (dtmb) | doubleArray | N/A | Distance between the particle position and the backward hit point, or a negative value if no hit is found.
hitMeshIdsF (hmif) | intArray | N/A | Array index of the mesh in the input attribute inMesh that is in front of the particle.
hitMeshIdsB (hmib) | intArray | N/A | Array index of the mesh in the input attribute inMesh that is behind the particle.
hitPolygonsF (hpgf) | intArray | N/A | Polygon id of the forward mesh the ray hits.
hitPolygonsB (hpgb) | intArray | N/A | Polygon id of the backward mesh the ray hits.
colorF (clf) | vectorArray | N/A | Texture color at the forward hit point. Min, max, offset, and weight are not applied.
colorB (clb) | vectorArray | N/A | Texture color at the backward hit point. Min, max, offset, and weight are not applied.
normalF (nmf) | vectorArray | N/A | Normal at the forward hit point.
normalB (nmb) | vectorArray | N/A | Normal at the backward hit point.
outColor (oc) | vectorArray | N/A | Output texture color. Min, max, offset, and weight are applied, and it may be interpolated.
outColorR (ocr) | doubleArray | N/A | R component of outColor. Note that this is not a child attribute of outColor.
outColorG (ocg) | doubleArray | N/A | G component of outColor. Note that this is not a child attribute of outColor.
outColorB (ocb) | doubleArray | N/A | B component of outColor. Note that this is not a child attribute of outColor.
outColorV (ocv) | doubleArray | N/A | V component of outColor in HSV color space. Note that this is not a child attribute of outColor.
outNormal (on) | vectorArray | N/A | Output normal value. Always a unit vector.
outNormalX (onx) | doubleArray | N/A | X component of outNormal.
outNormalY (ony) | doubleArray | N/A | Y component of outNormal.
outNormalZ (onz) | doubleArray | N/A | Z component of outNormal.

Here, a "per-particle field attribute sensitive" attribute is one whose value you can customize per particle, just as you can for a field node attribute: create a per-particle attribute whose name is (texPPattrMapperNode name)_(attribute name) and set per-particle values on it to customize the value.


... another example


... and two more examples
You can control particle movement with mesh face normals (you can also use vertex normals).


Friday, May 2, 2008

Embedding C code in Python

There is a very interesting approach to speeding up your Python code.
This is an example I found in the PyInline documentation.


import PyInline

m = PyInline.build(code="""
double my_add(double a, double b) {
return a + b;
}
""", language="C")

print m.my_add(4.5, 5.5) # Should print out "10.0"

I don't think I need to explain what this program does; it's quite obvious. Under the hood, it runs a compiler on the fly, creates a Python extension module (.pyd file), and imports it immediately so that the script can use it.

As far as I know, there are three such tools:

PyInline http://pyinline.sourceforge.net/

scipy.weave http://www.scipy.org/

instant http://www.fenics.org/wiki/Instant

scipy.weave is the most famous one; it is part of SciPy, a numerical package (I checked whether NumPy had inherited weave from SciPy but couldn't find it). Instant is based on SWIG.
If you are interested in reading the source code to see how these work, I recommend reading PyInline first, since it consists of only three .py files.