
Lexing separately is cleaner in that the lexer's only job is to produce tokens, and the parser's only job is to consume tokens and produce e.g. an AST. It also means you can reuse the token stream for things that don't require a full parse (e.g. syntax highlighting). The disadvantage is that you're often doubling the allocations and increasing the memory footprint. I alternate between a separate lexer, a combined recursive descent parser, and generated PEG parsers depending on what matters most: speed plus maintainability, speed of execution, or speed of development.
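A minimal sketch of the separate-lexer approach, in Python (the comment names no language, and the toy grammar of "+"-separated integers is my own invention): the lexer emits a flat token list that a syntax highlighter could consume on its own, and the parser is a second pass over that list.

```python
import re
from dataclasses import dataclass

@dataclass
class Token:
    kind: str  # "NUM" or "PLUS"
    text: str

def lex(src: str) -> list[Token]:
    """Lexing pass: produce a flat token stream.

    Note the extra allocation the comment mentions: every token is an
    object that exists only to be consumed by the parser. The same list
    could also feed a highlighter without running the parser at all.
    """
    tokens = []
    for m in re.finditer(r"\s+|(?P<NUM>\d+)|(?P<PLUS>\+)", src):
        if m.lastgroup:  # lastgroup is None when the whitespace branch matched
            tokens.append(Token(m.lastgroup, m.group()))
    return tokens

def parse(tokens: list[Token]) -> int:
    """Parsing pass: consume tokens left to right.

    For brevity this folds the "AST" straight into a sum instead of
    building a tree; the point is that it never touches the raw text.
    """
    pos = 0

    def expect(kind: str) -> Token:
        nonlocal pos
        tok = tokens[pos]
        if tok.kind != kind:
            raise SyntaxError(f"expected {kind}, got {tok.kind}")
        pos += 1
        return tok

    total = int(expect("NUM").text)
    while pos < len(tokens):
        expect("PLUS")
        total += int(expect("NUM").text)
    return total

print(parse(lex("1 + 2 + 3")))  # → 6
```

A combined recursive descent parser would instead interleave character-level scanning with the grammar rules, skipping the intermediate token list (and its allocations) at the cost of tangling the two concerns together.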

