Pages

Thursday, October 05, 2006

The beginnings...

How do you define a virtual museum?

(2006) From Wikipedia, the free encyclopedia

"Avirtual museum (sometimes web museum) is an online website with a collection of objects (real or virtual) or exhibitions. They include contemporary, historical and sometimes artistic content. Examples include the Virtual Museum of Computing. Some are produced by enthusiastic individuals such as the Lin Hsin Hsin Art Museum; others, like the UK's 24 Hour Museum and the Virtual Museum of Canada, are professional endeavours."

"A virtual museum is a collection of electronic artifacts and information resources - virtually anything which can be digitized. The collection may include paintings, drawings, photographs, diagrams, graphs, recordings, video segments, newspaper articles, transcripts of interviews, numerical databases and a host of other items which may be saved on the virtual museum's file server. It may also offer pointers to great resources around the world relevant to the museum's main focus."

2 comments:

Museums have always recorded information about their content, by individual items, collections, and exhibits. With the advent of photography, and especially recently with digital photography, museums increasingly record 2D pictures of items and sometimes scenes to complement text descriptions. In addition to using this descriptive information for their own purposes, museums are beginning to make some of this 2D content available via the Web. The ability to conveniently take multiple photographic views and laser-scanned representations of single objects has made possible increasingly realistic and accurate recordings of objects. These methods allow for the capture not just of the visual appearance of the object, but also of an accurate 3D spatial representation. This spatial information is of high enough quality to allow scholarly study and comparison of objects (Rowe 2003b). The methodology in this paper builds on previous work to capture both visually accurate information (photographic texture and color) and spatially accurate information (laser scanning) and integrate them into a combined virtual reality model.

Below we discuss the different methodologies used to capture 3D representations of objects and scenes. It is important to distinguish true 3D scene scanning from methods that capture multiple 2D images and stitch them together for a panoramic view, or interpolate between them to estimate other views. Sets of 2D images do not capture the spatial information in a true 3D scan, nor do they permit the viewing of the 3D scene from arbitrary viewpoints, or with arbitrary choices of lighting and visualization conditions. The methodology proposed in this paper as part of our Virseum project captures museum exhibits (setting and artifacts) precisely. 
We use techniques that capture spatial geometry accurately (a laser range finder covering a full 360-degree scan in azimuth and 270 degrees in elevation), plus 2D high-quality images to capture the color and texture of polygonal surfaces in the scene (tied to the laser range finder data), and very high-quality 2D images for capturing the texture color of important object close-ups (paintings, sculptures, etc.).

A 3D spatial model of a scene may be constructed several ways. The goal is to "produce a seamless, occlusion-free, geometric representation of the externally visible surfaces of an object", or in the general case a collection of objects (Levoy 1997). Modeling a scene by abstracting objects as simple geometric surfaces (such as with a computer-aided design program) makes the representation of the scene simpler (fewer triangles describing surfaces). The tradeoff is that it is not as accurate (abstracted rather than measured), and it is simplistic in appearance because of the simpler representation of surfaces and their textures. Examples include early work at creating models of historic sites, or the more simplistic movie special effects of early computer animation films. More accurate and realistic models can be generated from sensor readings of a scene. These fall into two categories: passive sensing (camera-recorded images) and active sensing (spatial coordinates recorded by a laser range finder). A good discussion of active sensing versus passive sensing is given in Levoy (1997). Passive sensing requires reconstructing a scene by solving for scene illumination, sensor geometry, object geometry, and object reflectance given multiple static 2D photographs taken of the scene. This continues to be a difficult problem in computer vision, primarily because it requires accurately finding corresponding features (points) between the different images. Active sensing devices such as laser range finders can be used to produce lattices of measurements of distance from the sensor location(s) to objects in the scene. 
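As a rough illustration of the active-sensing data described above, each laser range finder reading is an (azimuth, elevation, distance) triple that can be converted to a Cartesian point relative to the scanner. The function below is a minimal sketch, not part of the Virseum pipeline; the name and conventions (degrees in, z-up, azimuth measured from the x-axis) are assumptions for the example.

```python
import math

def range_to_cartesian(azimuth_deg, elevation_deg, distance):
    """Convert one range finder reading (spherical coordinates, degrees)
    to a Cartesian (x, y, z) point relative to the scanner origin."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance * math.cos(el) * math.cos(az)
    y = distance * math.cos(el) * math.sin(az)
    z = distance * math.sin(el)
    return (x, y, z)

# A reading straight ahead at 5 m lands on the x-axis.
print(range_to_cartesian(0.0, 0.0, 5.0))  # (5.0, 0.0, 0.0)
```

Sweeping azimuth through 360 degrees and elevation through 270 degrees, as in the scan described above, produces the lattice of 3D points that the later mesh-building step consumes.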
The challenging part of this process is reducing the "clouds" of points measured by the multiple scans into a small enough number of polygons for real-time rendering. This is done by discarding redundant points from multiple scans, and by combining very small polygons into larger polygons when appropriate (e.g. large flat surfaces such as walls).
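The redundant-point step can be sketched with a simple voxel-grid filter: points from overlapping scans that fall into the same small cube are collapsed to one representative. This is a generic illustration under assumed conventions (tuples for points, centroid as the representative), not the specific method used in the paper.

```python
def voxel_downsample(points, voxel_size):
    """Collapse a point cloud so that each cubic voxel of side voxel_size
    keeps a single representative point (the centroid of its members),
    discarding redundant points from overlapping scans."""
    buckets = {}
    for (x, y, z) in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        buckets.setdefault(key, []).append((x, y, z))
    # Average each bucket's points coordinate-wise to get its representative.
    return [
        tuple(sum(coord) / len(pts) for coord in zip(*pts))
        for pts in buckets.values()
    ]

# Two nearly coincident points collapse into one; the distant point survives.
cloud = [(0.01, 0.0, 0.0), (0.02, 0.0, 0.0), (5.0, 0.0, 0.0)]
print(len(voxel_downsample(cloud, 0.1)))  # 2
```

The second step mentioned above, merging many small coplanar triangles into larger polygons, is a separate mesh-simplification pass applied after the reduced cloud has been triangulated.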