Hello, I have recently read some papers about T-splines. In the paper "T-spline Simplification and Local Refinement" I have some questions about blending function refinement and T-spline spaces. (1) When we insert several knots in s and t, how do we compute the blending functions? Do we need to compute them by splitting each one into two blending functions every time? For example: original knot sequences [s0,s1,s2,s3,s4], [t0,t1,t2,t3,t4]; after refinement: [s0,k1,s1,k2,s2,k3,s3,k4,s4], [t0,m1,t1,m2,t2,t3,t4]. Now N[s0,s1,s2,s3,s4] can be written as a linear combination of blending functions defined over the substrings of length 5 in [s0,k1,s1,k2,s2,k3,s3,k4,s4]; what should we do? (2) T-spline spaces: what is the influence on the matrix M12 when the control points are in a different order? (3) For implementing the algorithm, can you give some advice about the data structure?

leoniewang wrote:(1) When we insert several knots in s and t, how do we compute the blending functions? Do we need to compute them by splitting each one into two blending functions every time? For example: original knot sequences [s0,s1,s2,s3,s4], [t0,t1,t2,t3,t4]; after refinement: [s0,k1,s1,k2,s2,k3,s3,k4,s4], [t0,m1,t1,m2,t2,t3,t4]. Now N[s0,s1,s2,s3,s4] can be written as a linear combination of blending functions defined over the substrings of length 5 in [s0,k1,s1,k2,s2,k3,s3,k4,s4]; what should we do?

For the purposes of the local refinement algorithm you will want to split the blending function into two pieces every time. While it is possible to refine a blending function (they can, after all, be expressed as a NURBS surface) it doesn't make any sense to do so in this context. The whole point of inserting knots into the blending function is to "give" some influence to the newly inserted points and this requires splitting the blending functions.
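For what it's worth, the split and its two scale factors are easy to compute: for a cubic blending function with local knots [s0..s4] and an inserted knot k, the first piece gets scale c = (k - s0)/(s3 - s0) clamped to 1, and the second gets d = (s4 - k)/(s4 - s1) clamped to 1. Here is a small Python sketch (my own illustration, not code from the paper; the idea carries over directly to C++), with a throwaway Cox-de Boor evaluator used only to check the identity:

```python
# Split a cubic blending function N[s0..s4] at a single inserted knot k:
# N = c * N[first five refined knots] + d * N[last five refined knots].

def bf_eval(knots, s, p=3):
    """Cox-de Boor evaluation of the blending function with local knot
    vector `knots` (length p + 2) at parameter s (checking helper only)."""
    if p == 0:
        return 1.0 if knots[0] <= s < knots[1] else 0.0
    v = 0.0
    if knots[p] > knots[0]:
        v += (s - knots[0]) / (knots[p] - knots[0]) * bf_eval(knots[:-1], s, p - 1)
    if knots[p + 1] > knots[1]:
        v += (knots[p + 1] - s) / (knots[p + 1] - knots[1]) * bf_eval(knots[1:], s, p - 1)
    return v

def split(knots, k):
    """Return (c, knots1), (d, knots2) with N[knots] = c*N[knots1] + d*N[knots2]."""
    s0, s1, s2, s3, s4 = knots
    refined = sorted(list(knots) + [k])
    c = min(1.0, (k - s0) / (s3 - s0))
    d = min(1.0, (s4 - k) / (s4 - s1))
    return (c, refined[:5]), (d, refined[1:])

# Example: insert k = 2.5 into [0, 1, 2, 3, 4] and check the identity.
(c, n1), (d, n2) = split([0, 1, 2, 3, 4], 2.5)
for s in [0.3, 1.7, 2.5, 3.9]:
    lhs = bf_eval([0, 1, 2, 3, 4], s)
    rhs = c * bf_eval(n1, s) + d * bf_eval(n2, s)
    assert abs(lhs - rhs) < 1e-12
```

To insert several knots, apply `split` repeatedly to each resulting piece and multiply the accumulated scale factors; this is exactly the "split every time" approach the local refinement algorithm needs.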

leoniewang wrote:(2) T-spline spaces: what is the influence on the matrix M12 when the control points are in a different order?

I believe you are referring to equation 11 in the local refinement paper. I'm not sure I understand your question correctly, but I'll try to answer:

Permutations of P permute the columns of M12 in the same fashion

Permutations of P-tilde permute the rows of M12 in the same fashion

But this is just standard linear algebra... so maybe I've misunderstood your question?
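Since it really is just bookkeeping, a tiny check (my own toy numbers, nothing from the paper) may make it concrete: if the refined points are M times the original points, reordering the original points is compensated by reordering the columns of M in the same way.

```python
# Tiny illustration: if Ptilde = M @ P, permuting the rows of P is
# compensated by permuting the columns of M in the same fashion.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

M = [[1, 2, 3], [4, 5, 6]]          # stand-in for M12 (2x3)
P = [[10], [20], [30]]              # three "control points" (scalars here)
Ptilde = matmul(M, P)

perm = [2, 0, 1]                    # reorder the points of P
M_perm = [[row[j] for j in perm] for row in M]   # columns of M, same order
P_perm = [P[j] for j in perm]
assert matmul(M_perm, P_perm) == Ptilde          # same product either way
```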

leoniewang wrote:(3) For implementing the algorithm, can you give some advice about the data structure?

This is something that I really can't offer much advice on as there is no concise answer. Giving data structure advice is really beyond the scope of this forum.

Hello, Nick. Thank you very much. But now I have another question. When new knots are inserted into the interval, theoretically, equations (11)-(15) in the paper give the relation between P and P-tilde. My question is about the computation of P-tilde. Should I first insert all desired points into the T-mesh with no violations, and then use equation (15) to compute P-tilde? Or should I insert one point into the T-mesh at a time and then compute P-tilde?

To answer your question directly: you may insert all of your knots simultaneously, if you wish, so long as you can properly track and resolve all violations.

However, let me offer some advice. I view equations (11)-(15) as the theoretical justification for the algorithm in section 4.3. I don't think it's a good way to actually implement the algorithm. Instead, I'd suggest you review section 4.3 carefully and follow the steps presented there if you are actually looking to implement (or understand) the algorithm.

I'll repeat the steps here:

1. Insert all desired control points into the T-mesh.

2. If any blending function is guilty of Violation 1, perform the necessary knot insertions into that blending function.

3. If any blending function is guilty of Violation 2, add an appropriate control point into the T-mesh.

4. Repeat Steps 2 and 3 until there are no more violations.

Here are some notes on these steps:

Step 1 suggests that you insert all desired control points into the mesh. All of these new control points will be guilty of Violation 3 since they should have no blending functions to start with. Note also that they do not even have geometric locations at this point.

Step 2 requires that you examine all points that do have blending functions and see if a knot was inserted (in Step 1 or Step 3) that requires a blending function to be split. See section 4.1 for a discussion of how blending functions are refined.

Refining a blending function means that it will "give" a portion of itself to its neighbors. This is how Violation 3 is resolved without explicitly addressing it.

Step 3 requires that we add control points to the mesh if a blending function has a knot that doesn't already correspond to a control point. These knots arise as blending functions are refined and passed around. Occasionally a control point will receive a blending function that has a knot that doesn't match the mesh. This is how "extra" control points get added to the surface beyond the ones added in Step 1.

Step 4 is simply a loop on Steps 2 and 3.

So, how do we use all of this to calculate the points P-tilde? The answer is quite simple: As blending functions are refined and passed around, each affected control point will collect a series of scaled blending functions from "donor" control points from P. When all violations have been resolved, the blending function at each point is simply a linear combination of these scaled blending functions. Likewise, the new geometry for each point, P-tilde, is simply a linear combination of the donor points (each in P) using the same scales as the blending functions. Therefore, P-tilde doesn't need to be calculated using equations (11)-(15). Instead, just follow the algorithm in section 4.3 and you will produce P-tilde more naturally.
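To make that accumulation concrete, here is a one-dimensional sketch (a plain B-spline curve, so none of the T-mesh bookkeeping applies): every affected blending function is split at the new knot, each piece carries a scale factor, and each refined control point is just the sum of scale times donor point. All names and helpers here are my own illustration, not code from the paper.

```python
# 1-D illustration of collecting scaled "donor" contributions into P-tilde.
# Cubic curve with global knots [0..7]; control point P[i] owns the blending
# function with local knots U[i:i+5]. We insert the knot k = 3.5.

def bf_eval(knots, s, p=3):
    """Cox-de Boor evaluation on a local knot vector (checking helper)."""
    if p == 0:
        return 1.0 if knots[0] <= s < knots[1] else 0.0
    v = 0.0
    if knots[p] > knots[0]:
        v += (s - knots[0]) / (knots[p] - knots[0]) * bf_eval(knots[:-1], s, p - 1)
    if knots[p + 1] > knots[1]:
        v += (knots[p + 1] - s) / (knots[p + 1] - knots[1]) * bf_eval(knots[1:], s, p - 1)
    return v

def split(knots, k):
    """Single-knot split: N[knots] = c*N[knots1] + d*N[knots2]."""
    s0, s1, s2, s3, s4 = knots
    refined = sorted(list(knots) + [k])
    c = min(1.0, (k - s0) / (s3 - s0))
    d = min(1.0, (s4 - k) / (s4 - s1))
    return (c, tuple(refined[:5])), (d, tuple(refined[1:]))

U = list(range(8))                       # global knots 0..7
P = [0.0, 1.0, 3.0, 2.0]                 # four (scalar) control points
old = {tuple(U[i:i + 5]): P[i] for i in range(4)}

k = 3.5
new = {}                                 # refined local knots -> accumulated P-tilde
for knots, point in old.items():
    if knots[0] < k < knots[4]:          # the insertion hits this function
        for scale, piece in split(knots, k):
            new[piece] = new.get(piece, 0.0) + scale * point
    else:                                # unaffected: passes through unchanged
        new[knots] = new.get(knots, 0.0) + point

# The refined curve reproduces the original geometry exactly:
for s in [3.0, 3.2, 3.5, 3.7, 3.99]:     # valid parameter range is [3, 4)
    before = sum(p * bf_eval(kn, s) for kn, p in old.items())
    after = sum(p * bf_eval(kn, s) for kn, p in new.items())
    assert abs(before - after) < 1e-12
```

Note that when two donors split, their pieces over the same local knot vector merge in `new`; that merging is the "collecting a series of scaled blending functions" described above, and the accumulated values are P-tilde.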

Hello Nick, thank you very much for your advice. But I have another problem. When many facets need to be refined, for example when each marked facet is to be divided into four subfacets, my question is: during the refinement process, should we divide one facet at a time, divide all the marked facets at the same time, or divide one facet by inserting one point and then checking whether the point is guilty of the violations?

If the division always uses the insertion algorithm, then the final surface should be the same regardless of the order in which the divisions occur. In our implementation, you could do the divisions in any of the ways suggested.

However, there is a little complication if you do it one-by-one: the topology could change due to the exactness requirements of the insertion algorithm, so you'd need to be careful that you're not performing insertions twice by mistake. For this reason, if your implementation can resolve blending function violations across large changes in geometry, it might be simpler to make the topological changes all at once and then settle the blending function violations with one invocation of the insertion algorithm.

I would like to add my question here. "T-spline Simplification & Local Refinement" mentions in eqs. (6)-(9) the c- and d-values for blending function refinement. These are the scaling factors for the two split blending functions. Do these values change as I go through further refinement steps? Do I have to keep each c/d-value in memory as part of the blending function?

The scale factors c and d are only used for the refinement of blending functions. If you implement T-spline refinement, the most direct approach would be to use these values temporarily during the refinement process. Once refinement is complete, however, you will no longer need these values.

Each T-spline blending function can be determined directly from the mesh without needing to reference blending function scale values. The purpose of section 4.1 is to show the math behind B-spline basis function refinement rather than describing implementation. Focus on section 4.3 to get a better idea of implementation.

This is regarding Tom's reply on the order of subdivision. I think the order of subdivision sometimes matters. In the example below,

The plane has 4 faces (A, B, C and D) in the original configuration.

[Image: tsplines8.png]

First I subdivided faces B and D.

[Image: tsplines5.png]

Then, if I subdivide ONLY face C, a new L-shaped face is created in face A. In this case I can't have 4 equally spaced subdivisions in A. I then need to carefully insert new control points 'exactly' to get the missing edges.

[Image: tsplines6.png]

But if I choose to subdivide BOTH faces A and C at the same time, each of A and C is divided into 4 new faces.

[Image: tsplines7.png]

So, I think that the order of selecting the faces to be subdivided affects the process of refinement. This seems obvious because of the way the refinement algorithm works. I think the algorithm first calculates all the control points to be inserted, based on the input of which faces are to be subdivided, and then inserts them and resolves all the violations. Correct me if I am wrong.

Chenna wrote:So, I think that the order of selecting the faces to be subdivided affects the process of refinement. This seems obvious because of the way the refinement algorithm works. I think the algorithm first calculates all the control points to be inserted, based on the input of which faces are to be subdivided, and then inserts them and resolves all the violations. Correct me if I am wrong.

This is correct. In general, we add all the desired topology as a pre-process, attaching "empty" blending functions to the added topology, and then resolve all violations. It is worth noting that there are many correct ways of resolving all violations, and the choices made by the algorithm regarding the order and method used for resolving violations can make a large difference in the amount of resulting topology. This is an active research area; I believe Mike Scott recently defended a PhD thesis which included this topic.

I wonder if I could get any suggestions on an efficient data structure for implementing T-splines in C++. I am struggling with what kind of data structure I should use, so to resolve that issue I am trying to understand the different data structures used in T-spline implementations. Would a face/edge/vertex data structure be good enough? I know there are many different ways of doing it, but as you have already worked on this, I can always count on your suggestions. I will be very thankful for any kind of help. Thanks in advance.

There are some fairly straightforward data structures if you are going to implement the simpler T-Spline surface without star points. Otherwise, things get a lot harder -- I honestly don't recommend implementing the fully general T-NURCCS surface; rather, I recommend using one of our plug-ins with an academic license and scripting it.

Adam, I don't think I will need to work on T-NURCCs in the near future. For the time being my focus is only on simpler T-spline surfaces. Can you please suggest some efficient data structures for implementing simple T-splines in C++? And please let me know if any library of those data structures is available on the internet.

Regarding the question of data storage: I tried to use a relational scheme of vertex/edge/face and even set up some SQL-database functionality. If we do not talk about efficiency, it is an initial solution that gave me a simple way of calculating the knot intervals in existing T-grids. On the other hand, adding new vertices was a bit more complex. I used MySQL, but a naive in-memory database engine should work too.

As long as the T-grid is rectangular and each face is four-sided, it is possible to combine the "geometrical" representation of the grid with the interval calculations on it.

I don't know whether it is possible to use half-edge or winged-edge data structures for T-NURCCs, but I guess there are some more straightforward data structures.
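The relational vertex/edge scheme can be sketched with Python's built-in sqlite3 in place of MySQL; the schema, table names, and toy data below are illustrative only. Computing knot intervals along a row then becomes a join:

```python
import sqlite3

# Toy relational T-grid: vertices carry (s, t) knot values; edges connect them.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE vertex (id INTEGER PRIMARY KEY, s REAL, t REAL);
    CREATE TABLE edge (v1 INTEGER, v2 INTEGER,
                       FOREIGN KEY (v1) REFERENCES vertex(id),
                       FOREIGN KEY (v2) REFERENCES vertex(id));
""")

# A row of three vertices along t = 0, plus the two edges between them.
db.executemany("INSERT INTO vertex VALUES (?, ?, ?)",
               [(1, 0.0, 0.0), (2, 1.0, 0.0), (3, 3.0, 0.0)])
db.executemany("INSERT INTO edge VALUES (?, ?)", [(1, 2), (2, 3)])

# Knot intervals along this row: the difference in s across each edge.
intervals = db.execute("""
    SELECT b.s - a.s FROM edge
    JOIN vertex a ON a.id = edge.v1
    JOIN vertex b ON b.id = edge.v2
    ORDER BY a.s
""").fetchall()
assert [d for (d,) in intervals] == [1.0, 2.0]
```

Inserting a vertex is indeed the more complex part under this scheme, since the edge it lands on must be deleted and replaced by two new edges.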

So, I think the easiest way to do the data structure when you're not using extraordinary points is to do the same thing as you do with NURBS -- just store a rectangular grid of points with complete knot vectors. All possible combinations of a knot in S and a knot in T mark a "node" in the grid. Then, for each node, you store two extra booleans: whether that node "blocks in T," and whether it "blocks in S." The meaning is as follows:

Blocks in S, blocks in T: this node is a control point.

Blocks in S, does not block in T: this node is in the middle of a [parametrically] vertical edge.

Blocks in T, does not block in S: this node is in the middle of a [parametrically] horizontal edge.

Does not block in S nor T: this node is in the middle of a face.

When tracing the blending functions, you are looking for these "block" flags, and you keep on stepping until you find enough of them to infer the full blending function. All continuity breaks will align with the global knot vector.

I wouldn't bother storing full polygon topology for a regular T-Spline; the grid-based method is much easier to work with.
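A minimal sketch of that grid scheme (Python for brevity, my own names throughout; the "block" terminology follows the description above): each node stores the two booleans, and tracing a blending function's local s-knot vector means stepping left and right from a control point, recording the s-knot at every node that blocks in S and skipping the rest.

```python
# Regular T-spline grid sketch: global knot vectors plus per-node block flags.
# For a cubic blending function centred at node (i, j), the local s-knot
# vector is [two blocking s-knots to the left, S[i], two to the right].

S = [0, 1, 2, 3, 4, 5, 6]          # global s-knots
T = [0, 1, 2, 3, 4, 5, 6]          # global t-knots

# blocks_s[j][i] / blocks_t[j][i]: does node (i, j) block in S / in T?
# Start from a fully regular grid (every node is a control point) ...
blocks_s = [[True] * len(S) for _ in T]
blocks_t = [[True] * len(S) for _ in T]
# ... then make node (i=3, j=2) the middle of a face: it blocks in neither
# direction, giving the row a T-junction-like gap.
blocks_s[2][3] = blocks_t[2][3] = False

def local_s_knots(i, j, half=2):
    """Trace the local s-knot vector of the blending function at node (i, j)."""
    left, right = [], []
    col = i - 1
    while col >= 0 and len(left) < half:
        if blocks_s[j][col]:          # record only nodes that block in S
            left.append(S[col])
        col -= 1
    col = i + 1
    while col < len(S) and len(right) < half:
        if blocks_s[j][col]:
            right.append(S[col])
        col += 1
    return list(reversed(left)) + [S[i]] + right

# On the row with the gap, the trace skips s = 3 and picks up s = 4 instead:
assert local_s_knots(2, 2) == [0, 1, 2, 4, 5]
# On a fully regular row, the knots are just consecutive:
assert local_s_knots(2, 1) == [0, 1, 2, 3, 4]
```

Tracing in t works the same way with `blocks_t` and the rows of the grid; since every recorded knot comes from the global vectors, all continuity breaks align with them, as noted above.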

In the beginning I also thought of working with a matrix-like structure similar to the one you have suggested. But I had to drop that idea because it occupies too much space when we do a lot of local refinement during the analysis. I was wondering if there is any better option.

Now I have started working with a FACE/EDGE/VERTEX data structure to eliminate redundant data storage, for ease of traversal, and also because in the future it will be easy to extend the code to include extraordinary points.

The matrix-like structure indeed sounds promising if you use a sparse matrix structure (non-blocking entries would then be 0). But what about refinement? When I want to refine the T-spline, how would the insertion process work? Let's say I would like to insert a knot between index 3 and 4 -- that would be (computationally) expensive, right?