Not quite. Most of the rendering functionality lives in Neo::StandardRenderer and Neo::FixedRenderer; Neo::NeoGame just sets up the environment for those classes to render correctly. This makes it possible to redirect the output to a texture or to layer two scenes. Neo::NeoGame also sets up post-processing if it is active and redirects all output to it.
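To illustrate that split of responsibilities, here is a minimal sketch of the draw flow under the assumptions above. The structs and method names (drawScene, bindOffscreenTexture, drawPostProcessedQuad, etc.) are hypothetical stand-ins, not the actual Neo::NeoGame code.

```cpp
// Rough sketch only; every name here is a simplified stand-in for the
// real Neo classes, chosen to show the flow rather than the actual API.
struct Scene {};

struct Renderer                 // stands in for Neo::StandardRenderer / Neo::FixedRenderer
{
    void drawScene(Scene&) {}   // the renderer does the actual drawing
};

struct Game                     // stands in for Neo::NeoGame
{
    Scene    scene;
    Renderer renderer;
    bool     postProcessing = true;

    void bindOffscreenTexture()   {}  // redirect all output to a texture
    void unbindOffscreenTexture() {}
    void drawPostProcessedQuad()  {}  // run the post-processing shader on the result

    // The game only prepares the environment; the renderer draws the scene.
    void draw()
    {
        if (postProcessing)
        {
            bindOffscreenTexture();
            renderer.drawScene(scene);
            unbindOffscreenTexture();
            drawPostProcessedQuad();
        }
        else
        {
            renderer.drawScene(scene);   // draw directly to the screen
        }
    }
};
```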
Both renderers access only the abstract rendering context, which keeps them API-independent. That way you could write a backend for Direct3D, GLES, GL, Mantle or any other API without changing a single line of code anywhere else. The only remaining problem would be the GLSL shaders, which would have to be replaced since they of course do not work with D3D.
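For illustration, a minimal sketch of what such an abstract rendering context could look like. The class and method names below are hypothetical and do not reflect the real Neo headers; they only show the idea of renderers talking to an interface instead of a concrete API.

```cpp
// Hypothetical abstract context; the renderers would call only these operations.
class RenderingContext
{
public:
    virtual ~RenderingContext() {}

    virtual void clearBuffer() = 0;
    virtual void bindTexture(unsigned int textureId) = 0;
    virtual void drawArray(unsigned int vertexCount) = 0;
};

// One backend implements them for a specific API, for example OpenGL...
class GLContext : public RenderingContext
{
public:
    void clearBuffer() override                     { /* glClear(...) */ }
    void bindTexture(unsigned int textureId) override { /* glBindTexture(...) */ }
    void drawArray(unsigned int vertexCount) override { /* glDrawArrays(...) */ }
};

// ...and a Direct3D or Mantle backend could be dropped in the same way,
// without touching Neo::StandardRenderer or Neo::FixedRenderer.
```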
Every behavior also has a “draw” method that lets you render custom graphics from within your behaviors. The particle system behavior does that, for example.
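As a rough sketch of how that hook could be used: the base class below is a simplified stand-in for Neo::Behavior (the real interface has more to it), and the custom behavior is entirely hypothetical.

```cpp
#include <cmath>

// Simplified stand-in for Neo::Behavior, only to keep the sketch self-contained.
class Behavior
{
public:
    virtual ~Behavior() {}
    virtual void update() = 0;
    virtual void draw() = 0;   // the hook used for custom graphics
};

// Hypothetical behavior that renders something of its own each frame,
// similar in spirit to the particle system behavior.
class PulsingQuadBehavior : public Behavior
{
public:
    void update() override
    {
        m_time += 1.0f / 60.0f;   // assumed fixed timestep for this sketch
    }

    void draw() override
    {
        // Compute a pulsating scale and issue custom draw calls through
        // the rendering context here (omitted).
        float scale = 1.0f + 0.5f * std::sin(m_time);
        (void) scale;
    }

private:
    float m_time = 0.0f;
};
```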