Data Search - Standards

Ȩ > ÀڷḶ´ç > ÀÚ·á°Ë»ö > Ç¥ÁØ

Search Results

Standard type: ICT Technical Report (TTAR)
Standard number: TTAR-11.0082    Former standard number: (none)
Date of enactment/revision: 2022-10-26    Total pages: 24
Korean title (translated): Requirements of Neural Network Models for Providing Interoperability between Neural Network Inference Engines for Embedded Systems (Technical Report)
English title: Requirements of Neural Network Model to provide interoperability between Neural Network Inference Engine on Embedded Systems
Korean abstract (translated): Inference engines running on embedded systems vary by software vendor and developer. Because of the nature of embedded systems, these inference engines must run on a variety of resource-constrained hardware. In general, inference engines use their own proprietary neural network model formats, and the neural network models and data formats that can be used may be constrained by the hardware. This report presents requirements for a neural network model that enables interoperability between hardware and inference engines on embedded systems.
English abstract: Inference engines operating on embedded systems vary by software provider and developer. These inference engines must be executed on various hardware that has resource constraints due to the nature of embedded systems. In general, each inference engine uses its own neural network model format, and there may be restrictions on the data format of the neural network model that can be used, depending on the hardware.
This document defines the requirements of the neural network model that enables interoperability among hardware and inference engines on embedded systems.
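As a minimal sketch of the interoperability problem the abstract describes, the following Python example exports a model trained in one framework to an engine-neutral interchange format and then loads it with a different inference engine. ONNX is used here purely as a well-known illustration and is an assumption of this sketch; the actual model format and requirements are those defined in TTAR-11.0082 itself.

import numpy as np
import torch
import torch.nn as nn

# Hypothetical toy network standing in for any trained model.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# Export to ONNX: after this step the model no longer depends on the
# training framework, so a resource-constrained embedded runtime can
# consume it directly.
dummy_input = torch.randn(1, 4)
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["output"])

# Load and run the same model with a different inference engine
# (ONNX Runtime here), demonstrating engine-to-engine interoperability.
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")
outputs = session.run(None, {"input": np.random.randn(1, 4).astype(np.float32)})
print(outputs[0].shape)  # (1, 2)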
Related IPR statements: None received
Related file: TTAR-11.0082.pdf
Standard History

Title: Requirements of Neural Network Model to provide interoperability between Neural Network Inference Engine on Embedded Systems (Technical Report)
Standard number: TTAR-11.0082
Date of enactment/revision: 2022-10-26
Type: Enacted
Valid: Yes
IPR statements: None
File: TTAR-11.0082.pdf