General GVC Area Questions

Scope

All PhD students in Geometric and Visual Computing (GVC) are expected to know the basic graphics principles and techniques (lightfield, ray-tracing and z-buffer scan-conversion, view control, scene graph computation, rendering pipeline operations) and the basic 2D graphics and 3D modeling terminology (topology, geometry), representation schemes (CSG, BSP, voxels, octrees, triangle mesh) and techniques (key-frame animation, intersection calculations, convex hulls, marching cubes). They should also be aware of the major issues (geometric complexity, real-time performance, numeric accuracy, sampling artifacts) and research opportunities in the overall GVC area. The following is a list of examples of topics that students should be familiar with:

  • Light (physics, color, lightfield, propagation, surface properties, BRDF)
  • Rasterization (lighting, clipping, perspective, scan-conversion, z-buffering, texture, shadows)
  • Acceleration techniques (strips, back face culling, occlusion, simplification, parallelism)
  • Linear algebra (vectors, points, lines, planes, changes of coordinate system, transformations)
  • Polygon processing (smoothing, refinement, resampling, area, center of mass)
  • Intersections between geometric primitives in 2D and 3D (edges, triangles, polygons, spheres...)
  • Topological operators and properties of point sets (open, interior, genus, manifold)
  • Morphological operations and properties (Hausdorff distance, offsetting, medial axis, rounding, Minkowski sum)
  • Enclosing bounds and triangulations of point clouds (convex hull, Delaunay, Voronoi, alpha-hull)
  • Triangle meshes (representation, traversals, normals, curvature, components, genus, geodesics)
  • Interpolating motion (linear, circular, screw)
  • Graphical user interaction (camera manipulation, picking, scene manipulation, shape editing)
  • Non-photorealistic rendering (silhouettes, hatching)
References
  • Fundamentals of Computer Graphics (3rd or 4th edition), by P. Shirley et al., AK Peters
  • Computational Geometry: Algorithms and Applications, M. de Berg, M. van Kreveld, M. Overmars, O. Schwarzkopf, pp. 1-44, Springer, 1997.
  • Prof. Rossignac's lecture notes and selected papers: Geometry, Topology, Curves, Meshes, Solids, Hausdorff, Motions

In addition to these references, students should be familiar with the broad topics covered by papers in recent ACM SIGGRAPH proceedings.

Courses
Sample Questions
  • OPEN RESEARCH PROBLEMS: List the 5 most important open problems in the Geometric and Visual Computing area that you expect to be funded and researched actively over the next 10 years. Discuss the motivation and summarize the current state of the art. Explain the research issues and say what makes them hard. How would you attack them if you had to?
  • COMPARE REPRESENTATIONS OF SOLIDS: Describe the following five representation schemes for solids: Boundary representation, CSG, BSP, Voxel, Octree. For each, describe in broad terms a typical data-structure and give the details of an efficient algorithm for classifying a point against a solid. Compare these five representations in terms of storage, performance of point-classification, and convenience for interactive design.
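    A minimal point-vs-CSG classification sketch in Python (the node classes, the choice of primitives, and the example solid are all hypothetical; boundary/"on" cases and the other representation schemes are not addressed):

      # Leaves answer "is the point inside this primitive?"; internal nodes combine
      # the answers of their two children with a boolean operation.
      class Sphere:
          def __init__(self, center, radius):
              self.center, self.radius = center, radius
          def contains(self, p):
              return sum((a - b) ** 2 for a, b in zip(p, self.center)) <= self.radius ** 2

      class HalfSpace:                          # points p with dot(n, p) <= d
          def __init__(self, n, d):
              self.n, self.d = n, d
          def contains(self, p):
              return sum(a * b for a, b in zip(self.n, p)) <= self.d

      class Boolean:                            # op is "union", "intersection", or "difference"
          def __init__(self, op, left, right):
              self.op, self.left, self.right = op, left, right
          def contains(self, p):
              a, b = self.left.contains(p), self.right.contains(p)
              if self.op == "union":        return a or b
              if self.op == "intersection": return a and b
              return a and not b                # difference

      # Example: a unit ball with the half-space z <= 0 cut away.
      solid = Boolean("difference", Sphere((0, 0, 0), 1.0), HalfSpace((0, 0, 1), 0.0))
      print(solid.contains((0, 0, 0.5)), solid.contains((0, 0, -0.5)))   # True False
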
  • PERFORMANCE ACCELERATION: Discuss the stages and factors that limit the performance of current-generation hardware-assisted rendering pipeline architectures. Discuss all the general-purpose graphics acceleration techniques that you know. For each, explain in broad terms the principles, fundamental algorithms, data-structures, expected benefits, and limitations.
  • IMAGE-BASED RENDERING: Assuming that you can track the head of a visitor in front of a large 2-megapixel screen, you wish to give her the impression that the screen is actually a window through which she can see a beautiful garden (where nothing moves). Explain precisely how you would acquire the necessary data, how you would store it, how much storage would be necessary, how you would render the images in real time, and how much rendering power you would need.
  • MORPHING: Consider two polygonal regions A and B. Describe in detail four techniques for animating a morph between them. What does it mean for the morph to be smooth? What does it mean for it to be minimal? For each of the techniques you described, discuss its smoothness, minimality, and domain.
  • PERSPECTIVE: The perspective transformation used for 3D rendering maps a model point P=(x,y,z) to a point (x',y',z') so that (x',y') are the coordinates of the point where P appears on the screen. Typically, z' is chosen as a monotonic function of z, so as to ensure that depth order is preserved for z-buffer based hidden surface removal. Company X has proposed to set z'=z, normalized so that it ranges between 0 and 2^32-1, and to represent z' as an integer. Explain why this approach does not work.
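    A small numeric illustration of one possible line of attack (a sketch only; the coordinates, the image-plane distance of 1, and the two sample vertices are arbitrary): along the projection of a 3D segment, depth z is not an affine function of screen position, but 1/z is, so a rasterizer cannot obtain correct depths by linearly interpolating z'=z across the screen.

      # Pinhole camera looking down +z, image plane at distance 1: screen_x = x / z.
      A = (0.0, 1.0)              # (x, z) of one vertex
      B = (2.0, 3.0)              # (x, z) of the other vertex
      sA, sB = A[0] / A[1], B[0] / B[1]
      s_mid = 0.5 * (sA + sB)     # pixel halfway between the two projected vertices

      # Depth obtained by interpolating z linearly in screen space:
      z_linear = 0.5 * (A[1] + B[1])                                 # 2.0

      # True depth of the point on segment AB that projects to s_mid:
      # solve (Ax + t*(Bx - Ax)) / (Az + t*(Bz - Az)) = s_mid for t.
      t = (s_mid * A[1] - A[0]) / ((B[0] - A[0]) - s_mid * (B[1] - A[1]))
      z_true = A[1] + t * (B[1] - A[1])                              # 1.5

      # Interpolating 1/z linearly in screen space recovers the true depth:
      z_persp = 1.0 / (0.5 * (1.0 / A[1] + 1.0 / B[1]))              # 1.5
      print(z_linear, z_true, z_persp)
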
  • SHADOW: Assume that the light source is the edge (u,v). It is not a point source, nor an area source. The scene comprises only two triangles connected to each other: (a,b,c) and (c,b,d). Both the light source and the triangles are above the floor. Explain very precisely how to compute the exact shadow (umbra) cast by the two triangles on the floor. In particular, provide a formal definition of the shadow and state its properties. Is it connected? Is it convex? Is it polygonal? Note that discrete methods which would use pixels or sample the edge are not appropriate. However, you may assume that geometric tools for generating simple shapes (rays, edge, planes, cones) and for computing their intersections are available.
  • REGISTRATION: Consider a viewpoint V and three screen locations (P,Q, and R) through which you see the three vertices (A,B,C) of an equilateral triangle of edge length L. Assume that you are given the 3D locations of V, P, Q, and R. Explain how you would compute a possible location for the vertices (A,B,C).
  • SELF-INTERSECTION: A piece of cloth is simulated by a triangle mesh and animated through physical simulation. However, the self-intersection detection algorithm is far too slow. Suggest a good algorithm that would detect self-intersections and explain in detail what it would do and how you would implement it.
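    A minimal broad-phase sketch along one common line of attack (a uniform spatial hash over triangle bounding boxes; the cell size, the example triangles, and the function name are assumptions, and the exact triangle-triangle test and the exclusion of pairs that share mesh vertices are left out):

      from collections import defaultdict
      from itertools import product

      def candidate_pairs(triangles, cell):
          # Hash each triangle's axis-aligned bounding box into a uniform grid;
          # only triangles that share a cell become candidate pairs.
          grid = defaultdict(list)
          for i, tri in enumerate(triangles):            # tri = 3 vertices (x, y, z)
              lo = [min(v[k] for v in tri) for k in range(3)]
              hi = [max(v[k] for v in tri) for k in range(3)]
              spans = [range(int(lo[k] // cell), int(hi[k] // cell) + 1) for k in range(3)]
              for key in product(*spans):
                  grid[key].append(i)
          pairs = set()
          for bucket in grid.values():
              for a in range(len(bucket)):
                  for b in range(a + 1, len(bucket)):
                      pairs.add((bucket[a], bucket[b]))
          return pairs    # to be passed to an exact triangle-triangle intersection test

      tris = [((0, 0, 0), (1, 0, 0), (0, 1, 0)),
              ((0.2, 0.2, -0.5), (0.2, 0.2, 0.5), (0.8, 0.2, 0.0))]
      print(candidate_pairs(tris, cell=1.0))             # {(0, 1)}
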
  • MORPHING: An algorithm that morphs between two convex planar polygons places them into two parallel planes z=0 and z=1, computes their convex hull H and computes the morph at time t between 0 and 1 as the intersection of H and the plane z=t. Assuming that the polygons are represented as circular lists of vertices in counterclockwise order, design a linear-time algorithm for computing H. Propose a way of representing the morph that allows it to be computed at successive time increments using as few expensive arithmetic operations (multiplications, divisions) as possible.
  • LIGHT: The intensity of perceived light coming from a point light source decays quadratically with distance. Explain why it is acceptable to assume (as we usually do in Computer Graphics) that the intensity of light perceived along an unobstructed ray remains constant. Discuss situations where this assumption is no longer appropriate. The amount of light emitted in direction V by a point on a smooth Lambertian surface with normal N is proportional to the dot-product of N with the light direction L. Explain why the intensity of light reflected by a Lambertian surface appears to be independent of the viewing direction.
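    One compact way to see the last part, stated in radiometric terms (a sketch using the standard definitions of intensity and radiance; a pixel's brightness measures radiance): the cosine fall-off of the intensity emitted by a Lambertian surface is exactly cancelled by the foreshortening of the visible area, so

      \mathrm{radiance}(\theta_v) \;=\; \frac{dI(\theta_v)}{\cos\theta_v\, dA}
      \;\propto\; \frac{\cos\theta_v \,(\mathbf{N}\cdot\mathbf{L})\, dA}{\cos\theta_v\, dA}
      \;=\; \mathbf{N}\cdot\mathbf{L},

    which does not depend on the viewing angle \theta_v. The quadratic fall-off, in contrast, describes the irradiance received from a point source, not the radiance carried by an individual unobstructed ray, which is constant in a non-participating medium.
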
  • SIMILARITY: You are given a family of similar shapes, for example 3D models of the femur bone scanned from a large population. You want to compute the average of these shapes. Please discuss several registration and shape-averaging approaches and comment on their strengths and weaknesses.
  • DEPTH: This question is about triangle rendering on a typical commodity PC graphics card. Be careful to notice which stage in rendering each sub-question is asking about.
    1. For triangles rendered in perspective, what is a typical treatment of the depth coordinate (usually z) by the vertex processor? What information does the vertex processor give to the next stage in the graphics pipeline?
    2. Describe the task of the triangle rasterization stage in a typical graphics card. Describe in particular how the depth information is treated.
    3. Describe how the per-fragment depth information is treated at the fragment processor to ensure correct hidden-surface removal.
    4. Explain why the particular depth values in (1) and (2) are used, and describe why problems can occur if the "obvious" depth is used.
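    A minimal sketch of the per-fragment depth test referred to in sub-questions 3 and 4 (pseudocode-style Python; it assumes depths already mapped to [0, 1] with smaller meaning closer, which is one common convention):

      # z-buffer: keep, per pixel, the smallest depth seen so far and only write a
      # fragment's color if it is closer than what is already stored.
      W, H = 4, 4
      depth_buffer = [[1.0] * W for _ in range(H)]        # initialized to "far"
      color_buffer = [[(0, 0, 0)] * W for _ in range(H)]

      def process_fragment(x, y, depth, color):
          if depth < depth_buffer[y][x]:                  # depth test
              depth_buffer[y][x] = depth                  # depth write
              color_buffer[y][x] = color                  # color write

      process_fragment(1, 1, 0.7, (255, 0, 0))            # far red fragment
      process_fragment(1, 1, 0.3, (0, 255, 0))            # nearer green fragment wins
      process_fragment(1, 1, 0.5, (0, 0, 255))            # rejected: behind the green one
      print(color_buffer[1][1], depth_buffer[1][1])       # (0, 255, 0) 0.3
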
  • GLIDER: We want to roller-blade on an oriented manifold triangle mesh M in 3D. Our feet are never to leave the surface. We want our head to remain vertical with respect to the surface of the mesh at the point of contact. Furthermore, we want to slide at a constant tangential velocity (no acceleration component in a tangential direction with respect to M). Assume that we start at a point P of edge (A,B) and that we just entered triangle T, which has vertices (A,B,C), with an initial velocity vector V that is tangential to T. Please describe in detail how you would compute the time t and position P’ when we leave T, and also how to identify the edge through which we exit T. Now, assume that we exit T through edge (B,C) and enter triangle T’ with vertices (C,B,D). Describe in detail how to compute our new velocity vector V’ in T’. For simplicity, assume that you never hit a vertex.
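    A minimal 2D sketch of the first step (an assumption-laden illustration: the triangle is taken to be already flattened into its own plane with counterclockwise vertices, positions and velocity are 2D, and the function name and epsilon are made up; the rotation of V into the plane of T’ about the shared edge is not shown):

      def exit_from_triangle(P, V, tri, eps=1e-9):
          # First time t > 0 at which P + t*V crosses an edge of the triangle,
          # together with the exit point and the indices of the exit edge.
          best_t, best_edge = None, None
          for i in range(3):
              A, B = tri[i], tri[(i + 1) % 3]
              n = (-(B[1] - A[1]), B[0] - A[0])     # inward normal of edge AB (CCW triangle)
              vn = V[0] * n[0] + V[1] * n[1]
              if vn >= -eps:                        # moving parallel to the edge or away from it
                  continue
              t = -((P[0] - A[0]) * n[0] + (P[1] - A[1]) * n[1]) / vn   # solve dot(n, P + t*V - A) = 0
              if t > eps and (best_t is None or t < best_t):
                  best_t, best_edge = t, (i, (i + 1) % 3)
          Q = (P[0] + best_t * V[0], P[1] + best_t * V[1])
          return best_t, Q, best_edge

      tri = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
      print(exit_from_triangle((0.5, 0.0), (1.0, 1.0), tri))   # (0.75, (1.25, 0.75), (1, 2))
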
  • SHADOW: A point light source is located at (0,0,h). The floor is defined by z=0. Provide simple formulae for the coordinates (x’,y’,z’) of the floor-shadow P’ cast by point P=(x,y,z). Now express the transformation that takes P to P’ as the result of a multiplication by a 4x4 matrix (using homogeneous coordinates) followed by a division. Explain how such a formulation can be used for casting shadows of 3D objects, discuss its limitations, and compare it to other approaches for rendering shadows.
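    A small numeric sanity check of the matrix formulation (a sketch under the stated setup only; the light height h = 4 and the test point are arbitrary): intersecting the line through the light (0,0,h) and P with the floor z = 0 gives x' = hx/(h-z), y' = hy/(h-z), z' = 0, which the matrix below reproduces after the homogeneous divide.

      import numpy as np

      h = 4.0
      M = np.array([[h, 0,  0, 0],
                    [0, h,  0, 0],
                    [0, 0,  0, 0],
                    [0, 0, -1, h]], dtype=float)

      def shadow(P):
          x, y, z, w = M @ np.array([P[0], P[1], P[2], 1.0])
          return (x / w, y / w, z / w)          # division by w = h - z

      print(shadow((1.0, 2.0, 1.0)))            # (4/3, 8/3, 0)
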
  • TEXTURE: A small company presented a graphics card which performs texturing as follows. For any textured triangle, it projects it to the screen. For each pixel within the triangle's projection, it linearly interpolates texture coordinates from the projected vertices and uses the interpolated texture coordinates (after proper scaling) to look up the color from the texture. Does this approach work (explain)? If it does not, how can it be fixed?
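    A one-dimensional numeric sketch of the issue (the vertex values are arbitrary; w stands for the clip-space depth of each vertex): interpolating texture coordinates linearly in screen space differs from interpolating u/w and 1/w and then dividing, and the gap grows with the depth ratio across the primitive.

      # One edge of a textured triangle; screen position is implicit in the parameter a.
      v0 = {"w": 1.0, "u": 0.0}
      v1 = {"w": 4.0, "u": 1.0}

      def affine_u(a):                      # screen-space linear interpolation
          return (1 - a) * v0["u"] + a * v1["u"]

      def perspective_u(a):                 # interpolate u/w and 1/w, then divide
          u_over_w = (1 - a) * v0["u"] / v0["w"] + a * v1["u"] / v1["w"]
          one_over_w = (1 - a) / v0["w"] + a / v1["w"]
          return u_over_w / one_over_w

      for a in (0.25, 0.5, 0.75):           # positions along the projected edge
          print(a, affine_u(a), perspective_u(a))
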
  • PROJECTION: Consider an ideal projector (with a point-sized bulb) that projects qualifier questions onto a screen. We don't assume anything about the orientation of the projector with respect to the screen (which means that the text on the screen will generally appear distorted). You take a picture of the projected text (again, with an ideal pinhole camera) from some (arbitrary) place in front of the screen, then do simple thresholding to extract the text and produce a slide out of your picture. Is it always possible to place the projector (with the new slide in it) so that the text shows up undistorted on the screen? Explain why.
  • OCCLUSION: What is the shape of the perspective projection of an unobstructed ball? Justify your answer. Provide the details of a simple geometric calculation that would establish precisely whether a ball of center G and radius r is hidden from a viewpoint V by a triangle having vertices A, B, and C. Use 3D vectors rather than coordinates in the formulation of your solution. Discuss how such a test could be used to accelerate graphics.
  • PERSPECTIVE: Consider a perspective transformation T that maps a point P=(x,y,z) to a point P'=dP/(d+z).
    1. Where are the viewpoint and the near and far clipping planes of the viewing frustum?
    2. What is the 3D image (result of a mapping by T) of the half-space z>0?
    3. What are the preimages by T of [0,d/2] and of [d/2,d]?
    4. What can you conclude? (Suppose for instance that your z-buffer had only one bit per pixel.)
    5. Prove that T maps planes onto planes.
    6. Why is this important?
  • LIGHTFIELD: Consider a lightfield L of a 3D object S. Let B be the subset of the rays of L that miss S.
    1. Suggest a mathematical expression of a good approximation A of S in terms of B.
    2. Can you guarantee that A contains S or is contained in S? Explain.
    3. Suggest an efficient algorithm for generating a triangle mesh that approximates A.
  • COLOR: Discuss which 3D graphics techniques are based on the assumption that the light reflected by a point P on an object in direction T remains constant along the ray through P parallel to T, as long as the ray does not intersect an obstacle. Yet we know that the energy received from a point source diminishes with the square of the distance. So how can that assumption be true? Explain what is going on. Discuss the limitations of that assumption and their implications for the correctness of 3D graphics techniques.
  • COLLISION: Briefly describe an algorithm to perform collision detection between N concave 3D objects moving around on a heightfield terrain. What is the O() runtime of this algorithm? Describe 4 general techniques to speed up the operation of this algorithm in practice, and present a brief analysis of where speedup would occur.
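    A sketch of one of the usual broad-phase speed-ups (sweep-and-prune over axis-aligned bounding boxes; each concave object is assumed to be summarized by one AABB, and the exact narrow-phase test against the concave geometry and the terrain is not shown):

      def broad_phase(aabbs):               # aabb = ((xmin, ymin, zmin), (xmax, ymax, zmax))
          # Sort by minimum x and only test pairs whose x-intervals overlap,
          # then confirm the overlap on y and z.
          order = sorted(range(len(aabbs)), key=lambda i: aabbs[i][0][0])
          pairs = []
          for a in range(len(order)):
              i = order[a]
              for b in range(a + 1, len(order)):
                  j = order[b]
                  if aabbs[j][0][0] > aabbs[i][1][0]:     # past i's xmax: stop early
                      break
                  if all(aabbs[j][0][k] <= aabbs[i][1][k] and
                         aabbs[i][0][k] <= aabbs[j][1][k] for k in (1, 2)):
                      pairs.append((min(i, j), max(i, j)))
          return pairs                      # candidates for the exact narrow-phase test

      boxes = [((0, 0, 0), (1, 1, 1)), ((0.5, 0.5, 0), (2, 2, 1)), ((3, 0, 0), (4, 1, 1))]
      print(broad_phase(boxes))             # [(0, 1)]
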
  • NEIGHBOR: Given a number of sites in the plane, we want to find the one that is closest (in the Euclidean metric) to a query point.
    1. Describe a heuristic based on space partitioning to speed up the queries of this kind for a given set of sites.
    2. Let's say that all the query points and sites live in some rectangle in the plane. How would you use the standard graphics hardware in a simple way to approximately (i.e. with pixel accuracy) answer the nearest neighbor queries?
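    For sub-question 2, a CPU stand-in for the idea (a brute-force sketch with made-up sizes; on graphics hardware the same table could be produced, for example, by drawing one depth-tested cone or full-screen pass per site): rasterize, for every pixel of the rectangle, the index of its nearest site, then answer each query with a single lookup.

      def build_nearest_map(sites, width, height):
          # For each pixel center, record the index of the nearest site
          # (a discrete, pixel-accuracy Voronoi diagram).
          nearest = [[0] * width for _ in range(height)]
          for py in range(height):
              for px in range(width):
                  p = (px + 0.5, py + 0.5)
                  nearest[py][px] = min(
                      range(len(sites)),
                      key=lambda i: (sites[i][0] - p[0]) ** 2 + (sites[i][1] - p[1]) ** 2)
          return nearest

      sites = [(2.0, 2.0), (13.0, 3.0), (8.0, 12.0)]
      table = build_nearest_map(sites, 16, 16)
      qx, qy = 12.4, 2.2                    # query point inside the 16 x 16 rectangle
      print(table[int(qy)][int(qx)])        # 1, the index of site (13.0, 3.0)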

Last updated 6/22/17.