PatentDe  


Document identification EP1246124 15.03.2007
EP publication number 0001246124
Title Method and apparatus for displaying an image space
Applicant Ricoh Co., Ltd., Tokyo, JP
Inventor Iwasaki, Masajiro, Yokohama-shi, Kanagawa, JP
Representative no representative currently appointed
DE file number 60217748
Contracting states DE, FR, GB
Language of document EN
EP filing date 18.03.2002
EP application number 022519342
EP publication date 02.10.2002
EP date of grant 24.01.2007
Date of publication in the Patent Bulletin 15.03.2007
IPC main class G06K 9/62 (2006.01)
IPC secondary classes G06T 11/20 (2006.01); G06F 17/30 (2006.01)

Description [en]

The present invention generally relates to a technique for displaying an image space and, more particularly, to an image space display method and apparatus for spatially displaying features of images.

As techniques for displaying an image space, the following techniques are known:

  1) "A User Interface Visualizing Feature Space for Content-Based Image Retrieval", Technical Report of IEICE, IE98-204; and
  2) "Visual Interaction for Exploration in Information Space of Documents", Journal of JSSST, Vol. 13.

The above-mentioned known technique 1) is a method for mapping image features onto a 2-dimensional display space by reducing the number of dimensions of a multidimensional space according to principal component analysis. The above-mentioned known technique 2) introduces an idea of dynamically updating visualized results in a visual classification technique, where the updating is performed in response to a user operation and the visual classification is given by arranging a large number of documents and keywords based on their mutual relationships.

However, the known technique 1) has a problem in that principal component analysis cannot be carried out when image features cannot be represented by vector data or when the similarity between images cannot be represented by a linear function. The known technique 2) has a drawback in that, when the number of keywords becomes large, the amount of computation becomes large and the processing time becomes lengthy, which makes it unsuitable for interactive presentation.

On the other hand, when a plurality of features are extracted from an image so as to display the feature space thereof on a 2-dimensional or 3-dimensional display space, a user can easily grasp the feature space if the features are assigned to the respective dimensional axes so as to give a meaning to each dimensional axis.

US 6,121,969 discloses a similarity-based database of images in which images are ranked and correlated according to biological preattentive similarity.

The article "Applications of Quadtree, Octree and Binary Tree Decomposition Techniques to Shape Analysis and Pattern Recognition" by BB Chaudhuri, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-7 No. 6, December 1985 pg 652-661 describes that an n-dimensional binary tree decomposition of feature space can be used for efficient pattern classifier design.

It is a general object of the present invention to provide an improved and useful image space display method and apparatus in which the above-mentioned problems are eliminated.

A more specific object of the present invention is to provide an image space display method and apparatus which enable a user to easily grasp a feature space by assigning each feature to a respective one of the dimensional axes of a display space.

In order to achieve the above-mentioned objects, there is provided according to the present invention an image-space display method and an image display apparatus as defined in the appended claims.

According to the present invention, when a plurality of features are used, a tree structure is generated for each of the features and each tree structure is mapped in one dimension so as to generate dimensional data corresponding to the number of the features. The dimensional data is used as the display coordinate-axes data so as to map the features to the dimensions of the display space. Thus, the image space displayed by the method of the present invention enables a user to easily grasp the feature space.

In the image space display method according to the present invention, dividing the display space may include the steps of: obtaining distances between each point of the features and the center points of two clusters; sorting all points according to a difference between the obtained distances; and dividing the display space into two clusters by setting a boundary of the clusters according to the order of the sorting.

The boundary of the clusters may be set between points for which the difference of the difference distances of the preceding and succeeding points is largest in the order of the sorting. An area corresponding to one of the dimensions of the display space may be divided in accordance with the distances to the center points of the two clusters and a ratio of differences of the difference distances at the boundary of the clusters. The boundary of the clusters may correspond to a halfway point in the order of the sorting. The features may be three-dimensional data corresponding to the form, texture and color of the images, and the features may be displayed by using the three-dimensional data as display coordinate-axes data.

Other objects, features and advantages of the present invention will become more apparent from the following detailed description when read in conjunction with the accompanying drawings.

  • FIG. 1 is a block diagram of an image display apparatus according to a first embodiment of the present invention;
  • FIG. 2 is a block diagram of a part of the image display apparatus performing an image display process;
  • FIG. 3 is an illustration of a tree structure generated by the image display process;
  • FIG. 4 is an illustration of a 1-dimensional map of the tree structure shown in FIG. 3;
  • FIG. 5 is an illustration showing a relationship between a 3-dimensional display space and features (form, texture, color) extracted from an image;
  • FIG. 6 is a flowchart of a feature-axis display space generation process according to the first embodiment of the present invention;
  • FIG. 7 is a flowchart of a display-space generation process I according to the first embodiment of the present invention;
  • FIG. 8 is a flowchart of a binary-tree generation process according to the first embodiment of the present invention;
  • FIG. 9 is a flowchart of a display-space generation process II according to the first embodiment of the present invention; and
  • FIG. 10 is a flow chart of a clustering process according to the first embodiment of the present invention.

FIRST EMBODIMENT

A description will now be given of a first embodiment of the present invention. FIG. 1 is a block diagram of an image display apparatus according to a first embodiment of the present invention. FIG. 2 is a block diagram of a part of the image display apparatus performing an image display process.

In FIG. 1, the image display apparatus comprises: a central processing unit (CPU) 1 which controls the entire operation of the image display apparatus; a read only memory (ROM) 2 which stores programs executed by the CPU 1; a random access memory (RAM) 3 which stores dynamic data for executing the programs stored in the ROM 2 and serves as a work area when the programs are executed; a keyboard 4 and a mouse 5 as input devices; a monitor 6 as a display apparatus; an image display unit 8 which executes an image application 7; a peripheral interface (I/F) 9 which functions as an interface with a peripheral device 11 such as a digital camera or a scanner; and a network I/F 10 which functions as an interface with a network 12. These components are connected to each other via a bus 13 so as to be controllable by the CPU 1, so that these components together serve as a computer to carry out various functions of the present invention.

Additionally, a memory-media driving unit 15 such as a CD-ROM driving unit is connected to the bus 13. The memory-media driving unit 15 reads program codes from a memory medium such as a CD-ROM so as to load the program codes stored in the memory medium to the computer so that the computer carries out various functions of the present invention mentioned later.

The image display part 8 comprises, as shown in FIG. 2, a feature-extraction unit 81, a clustering unit 82, a tree-structure generation unit 83, a display-space generation unit 84 and a display-screen generation unit 85. The units 81-85 perform a feature-extraction process, a clustering process, a tree-structure generation process, a display-space generation process and a display-screen generation process, respectively, in accordance with the image application 7.

A description will now be given of processes performed by the image display part 8 having the above-mentioned structure. The processes include 1) a feature-extraction process, 2) a feature-space tree-structure extraction process and 3) an image display screen generation process.

1) Feature-extraction process:

As features of an image, there are various features such as a histogram feature, an edge feature and a texture feature.

The present invention is applicable to any features.

Of course, the present invention is also applicable to features extracted from data other than image data. For example, the present invention is applicable to features extracted from text data. Here, a description will be given of an extraction process of a general histogram feature. As the image data from which features are extracted, there is image data supplied by the peripheral devices 11 such as a digital camera or a scanner, or image data downloaded from the Web. There is no limitation with respect to the input method.

First, a suitable color space (for example, Lab, Luv, HSV, etc.) is selected, and the selected color space is divided into a plurality of areas. Then, an investigation is performed as to which pixel of the image corresponds to which area of the color space. After counting the number of pixels for each area, the pixel-count data is normalized based on the total number of pixels. The normalized data of the number of pixels for each area corresponds to the histogram feature. The histogram feature serves as a point in the histogram feature space. As a distance between two features in the histogram feature space, the sum total of the differences of the numbers of pixels for each corresponding area of the two features or a Euclidean distance is generally used. In this way, the distance between features can be obtained.
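By way of illustration only (not code from the patent), the following Python sketch computes such a normalized histogram feature for an image given as an array of HSV pixel values scaled to [0, 1], and the sum-of-differences (L1) distance between two features; the bin counts and the choice of HSV are arbitrary assumptions.

```python
import numpy as np

def histogram_feature(hsv_pixels, bins=(8, 4, 4)):
    """Quantize an (N, 3) array of HSV pixels (values in [0, 1]) into a histogram.

    Each axis of the selected color space is divided into a fixed number of areas;
    the count of pixels falling into each area is normalized by the total number
    of pixels, giving the histogram feature as a flat vector.
    """
    hist, _ = np.histogramdd(hsv_pixels, bins=bins, range=[(0.0, 1.0)] * 3)
    return hist.ravel() / hsv_pixels.shape[0]

def histogram_distance(f1, f2):
    """Sum of per-area differences (L1 distance) between two histogram features."""
    return np.abs(f1 - f2).sum()
```

Two images with similar overall color distributions yield a small distance; for normalized histograms the value is at most 2.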

2) Feature-space tree-structure extraction process:

A feature space is a high-dimensional space, and cannot simply be displayed in two dimensions on a screen. Thus, the structure of the feature space is first expressed by a tree structure. Then, the feature space can be virtually expressed on the screen by mapping the tree structure onto the space of the screen.

In order to express the feature space by a tree structure, the feature space is clustered and divided into a plurality of sub-spaces so as to form nodes of the tree structure. Further, each sub-space (node) is clustered and divided into sub-spaces to form further nodes. The tree structure of the feature space can be generated by performing the above-mentioned operation recursively, until each feature is arranged in its own lowermost node. FIG. 3 shows an example of the thus-formed tree structure. Although the number of sub-spaces into which a space is divided may be any number equal to or greater than 2, the number is set to 2 here for the sake of convenience. In this tree structure, similar features are arranged close to each other, and the positional relationship of images in the image space is expressed by the tree structure.

As for the clustering method, the general nearest-neighbor method, the K-means algorithm, etc. can be used. Although similar images are arranged close to each other in this tree structure, the accuracy of the arrangement depends on the accuracy of the clustering. Thus, as a method for improving the accuracy of the clustering, clustering according to the following procedure can be used. It should be noted that the dividing number is set to 2 in this method.

(i) Acquisition of center point of cluster:

  • a) select an arbitrary point A in the space;
  • b) set the farthest point from the selected point A as a center point C1 of the first cluster; and
  • c) set the farthest point from the point C1 as a center point C2 of the second cluster.

(ii) Sorting of points:

  • a) select an arbitrary point P in the space;
  • b) calculate the distances between the selected point P and each of the center points C1 and C2 of the two clusters, and obtain the difference between the calculated distances as a difference distance: difference distance = |C1 - P| - |C2 - P|;
  • c) repeat a) and b) so as to obtain the difference distance for all points; and
  • d) sort all points according to ascending order of the difference distances.

(iii) Division of points:

For each pair of adjacent points in the sorted order, the difference between their difference distances is obtained, and the boundary of the clusters is set between the pair of points for which this difference is largest. The points on the side of the boundary with the smaller difference distances belong to the cluster of C1, and the points on the side with the larger difference distances belong to the cluster of C2. Moreover, when it is desired that the tree structure be well balanced, that is, when it is desired to distribute the images at equal intervals on the final display screen, the separation may instead be performed at the middle of the sorted points. The thus-obtained two clusters are set as nodes of the tree, and the same processes (i), (ii) and (iii) are performed recursively for each node. A tree is generated by this operation, and finally each image belongs to a leaf node.
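The procedure (i)-(iii) can be sketched in Python as follows (an illustrative reading of the above description, not code from the patent; dist may be the histogram distance above or any other distance function over the items being clustered, which may be feature vectors or image identifiers).

```python
def split_cluster(points, dist):
    """Divide a set of feature points into two clusters as described in (i)-(iii).

    points: list of items (feature vectors or image identifiers);
    dist: a distance function on those items.
    Returns (cluster1, cluster2).
    """
    # (i) center points: A arbitrary, C1 farthest from A, C2 farthest from C1.
    a = points[0]
    c1 = max(points, key=lambda p: dist(a, p))
    c2 = max(points, key=lambda p: dist(c1, p))

    # (ii) sort all points by the difference distance |C1 - P| - |C2 - P|.
    diff = lambda p: dist(c1, p) - dist(c2, p)
    ordered = sorted(points, key=diff)

    # (iii) set the cluster boundary where the gap between adjacent
    # difference distances is largest (alternatively, split at the middle
    # of the sorted points for a well-balanced tree).
    gaps = [diff(ordered[i + 1]) - diff(ordered[i]) for i in range(len(ordered) - 1)]
    cut = gaps.index(max(gaps)) + 1
    return ordered[:cut], ordered[cut:]
```

Sorting by the difference distance places points attracted to C1 at one end and points attracted to C2 at the other, so the largest gap between adjacent values is a natural cluster boundary.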

With the conventional technique, a screen is generated from a single thus-generated tree structure. In the present invention, however, a tree structure in binary-tree form is generated for each meaningful feature. Such meaningful features are, for example, color, form and texture, and each axis of the screen generated by the following screen generation process corresponds to one of these features.

3) Image display screen generation process:

After generating the tree structures, each tree structure is mapped in one dimension. The display area on the screen serves as the data area for this 1-dimensional mapping.

The tree structure is traced from its root, and a) the data area is divided into two so that the two child nodes are arranged in the respective sub-areas. The area may be divided equally. In order to improve the display accuracy, the dividing point may instead be decided in proportion to the number of nodes belonging to each child node, or in proportion to the distance between the dividing point obtained at the time of clustering and each center point. Furthermore, in consideration of the gap between the clusters, an area corresponding to the gap may be reserved and left without a child node. According to such a method, the display screen can express the similarity more accurately.

b) Process a) is then performed with respect to each child node.

Thus, by processing recursively, each tree structure is mapped in one dimension. FIG. 4 shows an example of the mapping of the tree structure shown in FIG. 3. In this example, the area is divided equally while tracing the tree structure.
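A minimal Python sketch of this recursive 1-dimensional mapping, assuming equal division of the data area (the tuple/leaf node representation is an assumption for illustration, not taken from the patent):

```python
def map_tree_to_1d(node, low=0.0, high=1.0, coords=None):
    """Recursively assign a 1-D coordinate in [low, high] to every image.

    A node is either a leaf (an image identifier) or a pair
    (left_child, right_child). Each internal node splits its data area
    equally between its children; a leaf is placed at the center of its area.
    """
    if coords is None:
        coords = {}
    if isinstance(node, tuple):           # internal node: divide the area in two
        left, right = node
        mid = (low + high) / 2.0
        map_tree_to_1d(left, low, mid, coords)
        map_tree_to_1d(right, mid, high, coords)
    else:                                 # leaf node: one image
        coords[node] = (low + high) / 2.0
    return coords
```

Calling map_tree_to_1d(root) once per feature tree yields, for each image, one coordinate per feature.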

By performing the above-mentioned process with respect to all tree structures, multidimensional data corresponding to the number of tree structures can be obtained. As for the screen display, since it is difficult to express more than three dimensions, the data is preferably limited to at most three dimensions. For example, if a tree structure is generated for each of the features color, form and texture, a 3-dimensional display space having axes corresponding to color, form and texture, respectively, can be generated.

FIG. 5 shows an example of a final spatial screen display. Each point shown in FIG. 5 is the location of the features extracted from an image according to the parameters of form, texture and color, and the image itself may be displayed at that location. It should be noted that, although form, texture and color extracted from an image are used as the features in the present embodiment, the present invention is not limited to such features and can be applied to any features, such as features extracted from text data.

A description will now be given, with reference to FIG. 6 through FIG. 10, of an image space display process according to the first embodiment of the present invention. The image space display process is performed by executing programs stored in the ROM 2 while the CPU 1 uses the RAM 3 as a work area. In the image space display process according to the present embodiment, a feature is assigned to each display dimension axis.

FIG. 6 is a flowchart of a process of generating a feature-axis display space. In this process, first, a feature A is selected and a call is made to the display-space generation process I so as to generate a 1-dimensional display space (step S101). Subsequently, the coordinates of the 1-dimensional display space of the feature A are acquired (step S102), and the acquired coordinates are set as coordinates of a display axis X (step S103). Additionally, a feature B is selected and a call is made to the display-space generation process I so as to generate a 1-dimensional display space (step S104). Then, the coordinates of the 1-dimensional display space of the feature B are acquired (step S105), and the acquired coordinates are set as coordinates of a display axis Y (step S106). Further, a feature C is selected and a call is made to the display-space generation process I so as to generate a 1-dimensional display space (step S107). Then, the coordinates of the 1-dimensional display space of the feature C are acquired (step S108), and the acquired coordinates are set as coordinates of a display axis Z (step S109). Finally, a 3-dimensional display screen is generated based on the coordinate values of the display axes acquired in steps S103, S106 and S109. It should be noted that, in this process, steps S101-S103 associate the feature A with the coordinate axis X, steps S104-S106 associate the feature B with the coordinate axis Y, and steps S107-S109 associate the feature C with the coordinate axis Z.
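Reusing the map_tree_to_1d sketch above, the overall effect of FIG. 6 can be illustrated as follows; the function name and the assignment of the color, form and texture trees to the features A, B and C are illustrative assumptions, not taken from the patent.

```python
def build_display_space(color_tree, form_tree, texture_tree):
    """Combine three per-feature 1-D maps into (x, y, z) display coordinates."""
    x = map_tree_to_1d(color_tree)    # feature A -> display axis X (steps S101-S103)
    y = map_tree_to_1d(form_tree)     # feature B -> display axis Y (steps S104-S106)
    z = map_tree_to_1d(texture_tree)  # feature C -> display axis Z (steps S107-S109)
    return {img: (x[img], y[img], z[img]) for img in x}
```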

FIG. 7 is a flowchart of the above-mentioned display-space generation process I. In this process, when image data is input, image features are extracted until the last image is processed (steps S201 and S202) so as to obtain a set of features (the feature space) (step S203). Subsequently, the binary-tree generation process shown in FIG. 8 is called (step S204), and the binary-tree generation process is performed with respect to the set of features (step S205). Then, the display-space generation process II shown in FIG. 9 is called with the root set (node) of the binary tree and a display space as inputs (step S206), and the display space is generated by performing the display-space generation process II (step S207).

The routine performed at the time of execution of the display-space generation process I shown in FIG. 7 is shown in FIGS. 8 through 10. FIG. 8 is a flowchart of the binary-tree generation process, which is called in step S204 and performed in step S205. In this process, a call is made to the clustering process shown in FIG. 10 so as to generate two subsets (clusters) A and B by dividing a feature set into two (step S301). The clustering process has been described above in the description of the 2) feature-space tree-structure extraction process. When the subsets A and B are generated, each of the subsets A and B is set as a child set (child node) of the feature set (step S302). Then, it is determined whether or not the number of feature elements of the subset A is 1 (step S303). If it is 1, it is determined whether or not the number of feature elements of the subset B is 1 (step S304). If the number of feature elements of the subset A is not 1 in step S303, the binary-tree generation process is called (step S305) so as to perform the process from step S301 by regarding the subset A as the set, and then the process of step S304 is performed. The process is ended if the number of feature elements of the subset B is 1 in step S304. If it is not 1, the binary-tree generation process is called (step S306) so as to perform the process from step S301 by regarding the subset B as the set, and then the process is ended.
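The recursion of FIG. 8 can be sketched compactly in Python, reusing the split_cluster function from the clustering sketch above (an illustrative reading of the flowchart, not code from the patent). When the leaves are image identifiers, the resulting (left, right) tuple representation matches the node convention assumed in the map_tree_to_1d sketch, so the two sketches compose.

```python
def build_binary_tree(feature_set, dist):
    """Binary-tree generation (FIG. 8 sketch): a leaf holds a single feature
    element, an internal node is the pair (subtree_A, subtree_B)."""
    if len(feature_set) == 1:                               # leaf set (steps S303/S304)
        return feature_set[0]
    subset_a, subset_b = split_cluster(feature_set, dist)   # clustering (step S301)
    return (build_binary_tree(subset_a, dist),              # recurse on subset A (step S305)
            build_binary_tree(subset_b, dist))              # recurse on subset B (step S306)
```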

FIG. 9 is a flowchart of the display-space generation process II, which is called in step S206 and performed in step S207. In this process, it is first judged whether or not the input set is a leaf set, that is, a set belonging to a lowermost node (step S401). If it is a leaf set, the image is arranged in the display space (step S402), and the process is ended. On the other hand, if it is judged in step S401 that the set is not a leaf set, the display space is divided into sub-spaces A and B (step S403). Then, each of the subsets A and B corresponding to the children of the binary tree is assigned to a respective one of the sub-spaces A and B (step S404). Then, the display-space generation process II is called recursively by regarding the subset A as the set and the sub-space A as the display space (step S405). Then, the display-space generation process II is called recursively by regarding the subset B as the set and the sub-space B as the display space (step S406). The process of steps S403 through S406 is repeated until a leaf set is reached, and each image is arranged in the display space once its leaf set is reached (step S402).

FIG. 10 is a flowchart of the clustering process, which is performed in step S301. As described in the feature-space tree-structure extraction process, first, an arbitrary point A is selected (step S501), and the point farthest from the selected point A is set as the point C1 (step S502). Then, the point farthest from the point C1 is set as the point C2 (step S503), and an arbitrary point P which has not yet been processed is selected (step S504). Next, the distances between the point P and each of the points C1 and C2 are calculated and their difference is obtained as the difference distance (step S505). Then, the selected point P is marked as processed (step S506). The process of steps S504 through S506 is repeated until no unprocessed points remain (step S507). After all points have been processed, all points are sorted in ascending order of the difference distances (step S508). Then, the difference between the difference distances of the preceding and succeeding points is calculated for all sorted points (step S509), and the set of points is divided at the cluster boundary where this difference is maximum (step S510). It should be noted that, although the features are assigned to the display-space dimensional axes as shown in FIG. 6 in the present embodiment, the above-mentioned display-space generation process I may also be called on its own when the features are not assigned to the display dimensional axes.

The present invention is not limited to the specifically disclosed embodiments, and variations and modifications may be made without departing from the scope of the present invention as defined by the appended claims.


Claims [en]
1. An image-space display method comprising: a) extracting (S201) features from images; b) generating (S301-S306) a tree structure for each of the features, ending in a leaf node for each image; c) generating dimension data corresponding to the number of the features; d) displaying the dimension data on the corresponding divided display spaces as display coordinate-axis data; e) dividing (S206) the display space according to the tree structure by mapping each tree structure in one dimension; and f) displaying the feature space by displaying the images in respective divisions (S207) of the divided display space.
2. The image space display method as claimed in claim 1, characterised in that dividing the display space includes the steps of: obtaining distances (S505) between each point of the features and centre points of two clusters with respect to each point; sorting (S508) all points according to a difference between the obtained distances; and dividing (S510) the display space into two clusters by setting a boundary of the clusters according to an order of the sorting.
3. The image space display method as claimed in claim 2, characterised in that the boundary of the clusters is set between points of which a difference of distances of preceding and succeeding points is largest in the order of the sorting.
4. The image space display method as claimed in claim 3, characterized in that an area corresponding to one of the dimensions of the display space is divided in accordance with distances to center points of the two clusters and a ratio of differences of the difference distances at the boundary of the clusters.
5. The image space display method as claimed in claim 2, characterized in that the boundary of the clusters corresponds to a halfway point in the order of the sorting.
6. The image space display method as claimed in claim 1, characterized in that the features are three-dimensional data corresponding to form, texture and color of the images, and the features are displayed by using the three-dimensional data as display coordinate-axes data.
7. An image-space display apparatus for extracting features from images, comprising means for performing all of the steps of any one of claims 1 to 6.
8. A computer program comprising program code means that, when executed on a computer, instructs the computer to perform the method of any one of claims 1 to 6.





