A First Look at Ogre's Material Parsing Code (Part 2)


This article draws on: http://www.cnblogs.com/yzwalkman/archive/2013/01/02/2841607.html (credit to the original author).


(The code in this article originally had Chinese comments on many lines, but the CSDN editor mangled the formatting; rather than re-add them one by one, please bear with the English comments that remain.)

Ogre parses scripts in three phases: lexical analysis; parsing, or semantic analysis; and compilation. Anyone who has studied compiler theory will recognize that this pipeline closely resembles the compilation of a program source file, only much simpler. In Ogre's script parsing, "lexical analysis" and "semantic analysis" are far simpler than in a real compiler: they merely organize the keywords and values of a script file into a specific structure according to Ogre's rules. "Compilation", however, means something entirely different from compiling source code: here it refers to translating the structure produced by the first two phases into the class objects that represent .material, .program, .particle, and .compositor resources. The code lives in Ogre's ScriptCompiler::compile(...) function, shown below.


bool ScriptCompiler::compile(const String &str, const String &source, const String &group)
{
    ScriptLexer lexer;   // lexical analyzer
    ScriptParser parser; // syntax analyzer
    ConcreteNodeListPtr nodes = parser.parse(lexer.tokenize(str, source));
    return compile(nodes, group);
}
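To make the data flow concrete, here is a minimal stand-alone sketch of the same three-stage shape. This is not Ogre code: the types and function names below (`Node`, `Material`, `compileNodes`, and so on) are simplified stand-ins invented for illustration. A raw string is tokenized, the token stream is grouped into a node tree, and the tree is then "compiled" into a final object.

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Stage 1: lexing -- split the raw script into whitespace-separated lexemes.
// (The real ScriptLexer is a character-level state machine; see below.)
std::vector<std::string> tokenize(const std::string& script) {
    std::istringstream in(script);
    std::vector<std::string> tokens;
    std::string t;
    while (in >> t) tokens.push_back(t);
    return tokens;
}

// Stage 2: parsing -- group tokens into named blocks at brace boundaries.
struct Node {
    std::string name;
    std::vector<std::string> body;
};

std::vector<Node> parse(const std::vector<std::string>& tokens) {
    std::vector<Node> nodes;
    for (size_t i = 0; i + 1 < tokens.size(); ++i) {
        if (tokens[i + 1] == "{") {
            Node n;
            n.name = tokens[i]; // the token just before "{" names the block
            i += 2;             // skip the name and the "{"
            while (i < tokens.size() && tokens[i] != "}") n.body.push_back(tokens[i++]);
            nodes.push_back(n);
        }
    }
    return nodes;
}

// Stage 3: "compiling" -- turn the node tree into a final object.
struct Material {
    std::string name;
    size_t propertyCount;
};

Material compileNodes(const std::vector<Node>& nodes) {
    // Each body is assumed to hold key/value pairs, hence size()/2 properties.
    return Material{nodes[0].name, nodes[0].body.size() / 2};
}
```

For a snippet such as `material Simple { lighting off }`, the sketch yields one node named `Simple` with one key/value property; Ogre's real pipeline does the same thing with much richer token and node types.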

The first parameter of compile(), str, is the character stream of the script file (obtained at the end of Part 1 of this series); source is the resource name, and group is the name of the resource group it belongs to. The script file is first processed by the lexer's tokenize() function, whose source follows.

ScriptTokenListPtr ScriptLexer::tokenize(const String &str, const String &source)
{
    // State enums
    enum { READY = 0, COMMENT, MULTICOMMENT, WORD, QUOTE, VAR, POSSIBLECOMMENT };

    // Set up some constant characters of interest
#if OGRE_WCHAR_T_STRINGS
    const wchar_t varopener = L'$', quote = L'\"', slash = L'/', backslash = L'\\',
                  openbrace = L'{', closebrace = L'}', colon = L':', star = L'*',
                  cr = L'\r', lf = L'\n';
    wchar_t c = 0, lastc = 0;
#else
    const wchar_t varopener = '$', quote = '\"', slash = '/', backslash = '\\',
                  openbrace = '{', closebrace = '}', colon = ':', star = '*',
                  cr = '\r', lf = '\n';
    char c = 0, lastc = 0;
#endif

    String lexeme;
    uint32 line = 1, state = READY, lastQuote = 0;
    ScriptTokenListPtr tokens(OGRE_NEW_T(ScriptTokenList, MEMCATEGORY_GENERAL)(), SPFM_DELETE_T);

    // Iterate over the input
    String::const_iterator i = str.begin(), end = str.end();
    while (i != end)
    {
        lastc = c;
        c = *i;

        if (c == quote)
            lastQuote = line;

        switch (state)
        {
        case READY:
            if (c == slash && lastc == slash)
            {
                // Comment start, clear out the lexeme
                lexeme = "";
                state = COMMENT;
            }
            else if (c == star && lastc == slash)
            {
                lexeme = "";
                state = MULTICOMMENT;
            }
            else if (c == quote)
            {
                // Clear out the lexeme ready to be filled with quotes!
                lexeme = c;
                state = QUOTE;
            }
            else if (c == varopener)
            {
                // Set up to read in a variable
                lexeme = c;
                state = VAR;
            }
            else if (isNewline(c))
            {
                lexeme = c;
                setToken(lexeme, line, source, tokens.get());
            }
            else if (!isWhitespace(c))
            {
                lexeme = c;
                if (c == slash)
                    state = POSSIBLECOMMENT;
                else
                    state = WORD;
            }
            break;
        case COMMENT:
            // This newline happens to be ignored automatically
            if (isNewline(c))
                state = READY;
            break;
        case MULTICOMMENT:
            if (c == slash && lastc == star)
                state = READY;
            break;
        case POSSIBLECOMMENT:
            if (c == slash && lastc == slash)
            {
                lexeme = "";
                state = COMMENT;
                break;
            }
            else if (c == star && lastc == slash)
            {
                lexeme = "";
                state = MULTICOMMENT;
                break;
            }
            else
            {
                state = WORD;
            }
            // deliberate fall-through into WORD
        case WORD:
            if (isNewline(c))
            {
                setToken(lexeme, line, source, tokens.get());
                lexeme = c;
                setToken(lexeme, line, source, tokens.get());
                state = READY;
            }
            else if (isWhitespace(c))
            {
                setToken(lexeme, line, source, tokens.get());
                state = READY;
            }
            else if (c == openbrace || c == closebrace || c == colon)
            {
                setToken(lexeme, line, source, tokens.get());
                lexeme = c;
                setToken(lexeme, line, source, tokens.get());
                state = READY;
            }
            else
            {
                lexeme += c;
            }
            break;
        case QUOTE:
            if (c != backslash)
            {
                // Allow embedded quotes with escaping
                if (c == quote && lastc == backslash)
                {
                    lexeme += c;
                }
                else if (c == quote)
                {
                    lexeme += c;
                    setToken(lexeme, line, source, tokens.get());
                    state = READY;
                }
                else
                {
                    // Backtrack here and allow a backslash normally within the quote
                    if (lastc == backslash)
                        lexeme = lexeme + "\\" + c;
                    else
                        lexeme += c;
                }
            }
            break;
        case VAR:
            if (isNewline(c))
            {
                setToken(lexeme, line, source, tokens.get());
                lexeme = c;
                setToken(lexeme, line, source, tokens.get());
                state = READY;
            }
            else if (isWhitespace(c))
            {
                setToken(lexeme, line, source, tokens.get());
                state = READY;
            }
            else if (c == openbrace || c == closebrace || c == colon)
            {
                setToken(lexeme, line, source, tokens.get());
                lexeme = c;
                setToken(lexeme, line, source, tokens.get());
                state = READY;
            }
            else
            {
                lexeme += c;
            }
            break;
        }

        // Separate check for newlines just to track line numbers
        if (c == cr || (c == lf && lastc != cr))
            line++;

        i++;
    }

    // Check for valid exit states
    if (state == WORD || state == VAR)
    {
        if (!lexeme.empty())
            setToken(lexeme, line, source, tokens.get());
    }
    else
    {
        if (state == QUOTE)
        {
            OGRE_EXCEPT(Exception::ERR_INVALID_STATE,
                Ogre::String("no matching \" found for \" at line ") +
                Ogre::StringConverter::toString(lastQuote),
                "ScriptLexer::tokenize");
        }
    }

    return tokens;
}

The return value, a ScriptTokenListPtr, is the collection of tokens produced by the character-by-character analysis. The main purpose of this lexical analysis is to extract each lexeme in the script file (a single word, the opening half of a brace pair, the closing half of a brace pair, and so on each count as a lexeme), and to create a token object for each lexeme that stores the lexeme's associated information.
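As a concrete illustration of what the state machine produces, the sketch below is a drastically simplified tokenizer (not Ogre's ScriptLexer; it handles only words, braces, and // comments, and ignores quotes, variables, and newline tokens). It shows the two behaviors worth noticing in the real code: braces become single-character lexemes of their own, and a // comment consumes everything to the end of the line.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Simplified sketch of the lexer's READY/WORD/COMMENT states.
std::vector<std::string> lex(const std::string& src) {
    enum { READY, WORD, COMMENT } state = READY;
    std::vector<std::string> lexemes;
    std::string cur;
    for (size_t i = 0; i < src.size(); ++i) {
        char c = src[i];
        if (state == COMMENT) {                   // skip until end of line
            if (c == '\n') state = READY;
            continue;
        }
        if (c == '/' && i + 1 < src.size() && src[i + 1] == '/') {
            if (!cur.empty()) { lexemes.push_back(cur); cur.clear(); }
            state = COMMENT;
            ++i;                                  // consume the second '/'
            continue;
        }
        if (c == '{' || c == '}') {               // braces are single-char lexemes
            if (!cur.empty()) { lexemes.push_back(cur); cur.clear(); }
            lexemes.push_back(std::string(1, c));
            state = READY;
        } else if (c == ' ' || c == '\t' || c == '\n' || c == '\r') {
            if (!cur.empty()) { lexemes.push_back(cur); cur.clear(); }
            state = READY;
        } else {
            cur += c;                             // accumulate a word lexeme
            state = WORD;
        }
    }
    if (!cur.empty()) lexemes.push_back(cur);     // flush the final word
    return lexemes;
}
```

Feeding it `pass // base pass`, a brace block, and a `diffuse 1 0 0` line yields the lexeme list `pass`, `{`, `diffuse`, `1`, `0`, `0`, `}`, with the comment text dropped, which is exactly the kind of stream setToken() then classifies.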
Creating the token object and storing the corresponding lexeme's information is handled by ScriptLexer::setToken(); here is its source:


void ScriptLexer::setToken(const Ogre::String &lexeme, Ogre::uint32 line,
                           const String &source, Ogre::ScriptTokenList *tokens)
{
#if OGRE_WCHAR_T_STRINGS
    const wchar_t openBracket = L'{', closeBracket = L'}', colon = L':',
                  quote = L'\"', var = L'$';
#else
    const char openBracket = '{', closeBracket = '}', colon = ':',
               quote = '\"', var = '$';
#endif

    ScriptTokenPtr token(OGRE_NEW_T(ScriptToken, MEMCATEGORY_GENERAL)(), SPFM_DELETE_T);
    token->lexeme = lexeme;
    token->line = line;
    token->file = source;
    bool ignore = false;

    // Check the user token map first
    if (lexeme.size() == 1 && isNewline(lexeme[0]))
    {
        token->type = TID_NEWLINE;
        if (!tokens->empty() && tokens->back()->type == TID_NEWLINE)
            ignore = true;
    }
    else if (lexeme.size() == 1 && lexeme[0] == openBracket)
        token->type = TID_LBRACKET;
    else if (lexeme.size() == 1 && lexeme[0] == closeBracket)
        token->type = TID_RBRACKET;
    else if (lexeme.size() == 1 && lexeme[0] == colon)
        token->type = TID_COLON;
    else if (lexeme[0] == var)
        token->type = TID_VARIABLE;
    else
    {
        // This is either a non-zero length phrase or quoted phrase
        if (lexeme.size() >= 2 && lexeme[0] == quote && lexeme[lexeme.size() - 1] == quote)
        {
            token->type = TID_QUOTE;
        }
        else
        {
            token->type = TID_WORD;
        }
    }

    if (!ignore)
        tokens->push_back(token);
}

In each token object, lexeme stores the lexeme produced by one pass of the while loop in tokenize(), line stores the line number it appeared on, file stores the resource name, and type stores the lexeme's type, defined as follows.

enum
{
    TID_LBRACKET = 0, // {
    TID_RBRACKET,     // }
    TID_COLON,        // :
    TID_VARIABLE,     // $...
    TID_WORD,         // *
    TID_QUOTE,        // "*"
    TID_NEWLINE,      // \n
    TID_UNKNOWN,
    TID_END
};
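The classification logic in setToken() boils down to the small decision function below. This is a sketch for illustration, not Ogre code: it reuses the TID_* names but tests characters directly instead of calling Ogre's isNewline() helper, and the order of the checks mirrors the original (single-character cases first, then variables, then quoted phrases, with TID_WORD as the fallback).

```cpp
#include <cassert>
#include <string>

enum TokenType { TID_LBRACKET = 0, TID_RBRACKET, TID_COLON, TID_VARIABLE,
                 TID_WORD, TID_QUOTE, TID_NEWLINE, TID_UNKNOWN, TID_END };

// Sketch of setToken()'s classification rules.
TokenType classify(const std::string& lexeme) {
    if (lexeme.size() == 1 && (lexeme[0] == '\n' || lexeme[0] == '\r'))
        return TID_NEWLINE;
    if (lexeme.size() == 1 && lexeme[0] == '{') return TID_LBRACKET;
    if (lexeme.size() == 1 && lexeme[0] == '}') return TID_RBRACKET;
    if (lexeme.size() == 1 && lexeme[0] == ':') return TID_COLON;
    if (!lexeme.empty() && lexeme[0] == '$')    return TID_VARIABLE;
    // A quoted phrase keeps its surrounding quote characters in the lexeme.
    if (lexeme.size() >= 2 && lexeme.front() == '"' && lexeme.back() == '"')
        return TID_QUOTE;
    return TID_WORD;                             // everything else is a word
}
```

So `{` classifies as TID_LBRACKET, `$scale` as TID_VARIABLE, `"a b"` as TID_QUOTE, and an ordinary identifier such as `diffuse` as TID_WORD.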

That covers the entire lexical-analysis stage. The next article will introduce the semantic-analysis stage.
