Using Stanford NLP for Chinese
The Stanford NLP tools include components for processing Chinese, among them a word segmenter and a parser; for details see:
http://nlp.stanford.edu/software/parser-faq.shtml#o
1. Word segmentation: Chinese Segmenter
Download: http://nlp.stanford.edu/software/
Stanford Chinese Word Segmenter: a Java implementation of a CRF-based Chinese word segmenter.
This package is fairly large and needs a lot of memory at runtime, so if you run it from Eclipse you have to raise the JVM heap size:
Run Configuration -> Arguments -> VM arguments -> -Xmx800m (800 MB maximum heap)
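When the segmenter is run from a plain command line instead of Eclipse, the same heap limit is passed straight to the JVM; the jar and class names below are only placeholders for whatever you actually run:

    # jar and class names are placeholders; substitute the ones from your download
    java -Xmx800m -cp stanford-segmenter.jar SegmenterDemo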
Demo code (modified, not tested):
    // requires: import java.util.Properties; import edu.stanford.nlp.ie.crf.CRFClassifier;
    Properties props = new Properties();
    props.setProperty("sighanCorporaDict", "data");
    // props.setProperty("NormalizationTable", "data/norm.simp.utf8");
    // props.setProperty("normTableEncoding", "UTF-8");
    // needed because CTBSegDocumentIteratorFactory accesses it
    props.setProperty("serDictionary", "data/dict-chris6.ser.gz");
    // props.setProperty("testFile", args[0]);
    props.setProperty("inputEncoding", "UTF-8");
    props.setProperty("sighanPostProcessing", "true");

    // load the CTB (Chinese Treebank) model
    CRFClassifier classifier = new CRFClassifier(props);
    classifier.loadClassifierNoExceptions("data/ctb.gz", props);
    // flags must be re-set after data is loaded
    classifier.flags.setProperties(props);
    // classifier.writeAnswers(classifier.test(args[0]));
    // classifier.testAndWriteAnswers(args[0]);

    // segment a sentence and print the segmented result
    String result = classifier.testString("我是中國人!");
    System.out.println(result);
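testString returns the whole sentence as one string; assuming the segmenter's usual space-separated output (which is what sighanPostProcessing is meant to produce), a plain split recovers the individual tokens. A minimal sketch, with variable names of my own:

    // assumes the segmenter separates words with spaces; variable names are mine
    String[] tokens = result.split("\\s+");
    for (String token : tokens) {
        System.out.println(token);
    }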
2. Stanford Parser
For reference: http://nlp.stanford.edu/software/parser-faq.shtml#o
http://blog.csdn.net/leeharry/archive/2008/03/06/2153583.aspx
Depending on the trained grammar that is loaded, it can handle English or Chinese. The input is an already-segmented sentence; the output includes part-of-speech tags and the sentence's parse tree (with dependency relations).
English demo (included in the downloaded archive):
    // requires: import java.util.*; import edu.stanford.nlp.trees.*;
    //           import edu.stanford.nlp.parser.lexparser.LexicalizedParser;
    LexicalizedParser lp = new LexicalizedParser("englishPCFG.ser.gz");
    lp.setOptionFlags(new String[]{"-maxLength", "80", "-retainTmpSubcategories"});
    String[] sent = { "This", "is", "an", "easy", "sentence", "." };
    Tree parse = (Tree) lp.apply(Arrays.asList(sent));
    parse.pennPrint();
    System.out.println();

    // typed dependencies via the English (Penn Treebank) language pack
    TreebankLanguagePack tlp = new PennTreebankLanguagePack();
    GrammaticalStructureFactory gsf = tlp.grammaticalStructureFactory();
    GrammaticalStructure gs = gsf.newGrammaticalStructure(parse);
    Collection tdl = gs.typedDependenciesCollapsed();
    System.out.println(tdl);
    System.out.println();

    // print the tree and collapsed dependencies in one call
    TreePrint tp = new TreePrint("penn,typedDependenciesCollapsed");
    tp.printTree(parse);
Chinese is handled a little differently:
    // LexicalizedParser lp = new LexicalizedParser("englishPCFG.ser.gz");
    // additionally requires: import edu.stanford.nlp.trees.international.pennchinese.ChineseTreebankLanguagePack;
    LexicalizedParser lp = new LexicalizedParser("xinhuaFactored.ser.gz");
    // lp.setOptionFlags(new String[]{"-maxLength", "80", "-retainTmpSubcategories"});
    // String[] sent = { "This", "is", "an", "easy", "sentence", "." };
    String[] sent = { "他", "和", "我", "在", "學校", "里", "常", "打", "桌球", "。" };
    String sentence = "他和我在學校里常打臺球。";
    Tree parse = (Tree) lp.apply(Arrays.asList(sent));
    // Tree parse = (Tree) lp.apply(sentence);

    parse.pennPrint();

    System.out.println();
    /*
    TreebankLanguagePack tlp = new PennTreebankLanguagePack();
    GrammaticalStructureFactory gsf = tlp.grammaticalStructureFactory();
    GrammaticalStructure gs = gsf.newGrammaticalStructure(parse);
    Collection tdl = gs.typedDependenciesCollapsed();
    System.out.println(tdl);
    System.out.println();
    */
    // English only:
    // TreePrint tp = new TreePrint("penn,typedDependenciesCollapsed");
    // Chinese:
    TreePrint tp = new TreePrint("wordsAndTags,penn,typedDependenciesCollapsed", new ChineseTreebankLanguagePack());
    tp.printTree(parse);
However, sometimes we want more than the printed dependency relations; to work with the parse tree (graph) itself, a program like the following is needed:
    // requires the same imports as the demos above (edu.stanford.nlp.trees.* covers TypedDependency)
    // ParserSentence is the author's own wrapper class that runs the parser on a token array
    String[] sent = { "他", "和", "我", "在", "學校", "里", "常", "打", "桌球", "。" };
    ParserSentence ps = new ParserSentence();
    Tree parse = ps.parserSentence(sent);
    parse.pennPrint();

    TreebankLanguagePack tlp = new ChineseTreebankLanguagePack();
    GrammaticalStructureFactory gsf = tlp.grammaticalStructureFactory();
    GrammaticalStructure gs = gsf.newGrammaticalStructure(parse);
    Collection tdl = gs.typedDependenciesCollapsed();
    System.out.println(tdl);
    System.out.println();
    for (Object o : tdl)
    {
        // TypedDependency(GrammaticalRelation reln, TreeGraphNode gov, TreeGraphNode dep)
        TypedDependency td = (TypedDependency) o;
        System.out.println(td.toString());
    }
GrammaticalStructure also provides getGrammaticalRelation(TreeGraphNode gov, TreeGraphNode dep), which returns the grammatical dependency relation between two words.