
In C++, apart from the placement new that others mentioned, it is possible to globally replace new and delete by defining these four functions:

    void* operator new  ( std::size_t count );
    void* operator new[]( std::size_t count );
    void operator delete  ( void* ptr ) noexcept;
    void operator delete[]( void* ptr ) noexcept;
(from http://en.cppreference.com/w/cpp/memory/new/operator_new and http://en.cppreference.com/w/cpp/memory/new/operator_delete)

If you do it that way, you do not need to change any of the code that calls new and delete.
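A minimal sketch of such a global replacement, forwarding to malloc/free (the allocation counter is my own addition, just to make the effect observable; since C++14 there are also sized delete overloads, omitted here):

```cpp
#include <cstdlib>
#include <new>

// Hypothetical counter, only here to show the replacement is actually used.
static std::size_t g_allocations = 0;

void* operator new(std::size_t count) {
    ++g_allocations;
    if (void* p = std::malloc(count)) return p;
    throw std::bad_alloc{};   // required on failure for the throwing form
}

void* operator new[](std::size_t count) {
    return operator new(count);   // reuse the scalar version
}

void operator delete(void* ptr) noexcept { std::free(ptr); }
void operator delete[](void* ptr) noexcept { std::free(ptr); }
```

Every `new`/`delete` in the program, including those inside library code, now routes through these functions without any caller changing.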

And I disagree about the 'relatively' in "I know that in C it is relatively easy to define your own memory management functions"

It is easy to piggy-back on an existing memory allocator, and it is fairly easy to make an implementation that works from a single thread, or one that performs horribly from multiple threads. But it is not easy to make one that performs well from multiple threads and is useful.

[That "and is useful" is from the hacker in me. It is trivial to make what I think is a conforming implementation that scales exceptionally well across threads, along the lines of (I am guessing at the prototypes):

    void * malloc(size_t t)            { return NULL;}
    void * calloc(size_t n, size_t t)  { return NULL;}
    void   free(void *m)               {}
    void * realloc( void *m, size_t t) { return NULL;}
but that would not be a very useful implementation]


I should clarify: I wasn't actually making my own malloc, I was only macro-switching it so that I could keep a list of everything malloc returned. I was also intercepting free, so I could see which objects had been malloc'd but not free'd. I think I mentioned it was to track down memory leaks.
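The macro-switching approach can be sketched roughly like this (the names `tracked_malloc`/`tracked_free` and the tracking table are my own invention, not the original code; redefining `malloc` as a macro after including the standard header is technically murky but works in practice, which is the whole trick):

```cpp
#include <cstdlib>
#include <map>

struct AllocInfo { std::size_t size; const char* file; int line; };

// Hypothetical table of live allocations: pointer -> (size, call site).
static std::map<void*, AllocInfo>& alloc_table() {
    static std::map<void*, AllocInfo> t;
    return t;
}

void* tracked_malloc(std::size_t n, const char* file, int line) {
    void* p = std::malloc(n);
    if (p) alloc_table()[p] = AllocInfo{n, file, line};
    return p;
}

void tracked_free(void* p) {
    alloc_table().erase(p);
    std::free(p);
}

// Macro-switch: every malloc/free compiled after this point is intercepted.
#define malloc(n) tracked_malloc((n), __FILE__, __LINE__)
#define free(p)   tracked_free(p)
```

Anything still sitting in the table at exit was malloc'd but never free'd, with the file and line of the leaking call site attached.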

The example I quoted from Python's implementation, though, seems closer to the real thing: rather than calling malloc whenever it needs more space, it allocates large buffers and hands out memory from them, reducing the frequency of calls to malloc. The trade-off is that its garbage collector sometimes can't return as much RAM as you'd hope, since there are nearly-empty buffers that can't be released. This makes running Python instructions that need a little more RAM much faster, at the expense of complexity when RAM is short. And overall the program should be faster, since it makes fewer system calls (assuming the code that hands out RAM is well optimised, which should be possible since it can be much simpler than the system malloc).
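The buffer-and-hand-out idea can be sketched as a bump-pointer arena (a simplification in the spirit of CPython's pymalloc, not its actual code; the class name and 8-byte alignment choice are my own assumptions):

```cpp
#include <cstdlib>
#include <cstddef>

// One big malloc up front, many cheap sub-allocations from it.
// Memory only goes back to the system when the whole arena is destroyed,
// which is exactly why nearly-empty buffers can pin RAM.
class Arena {
public:
    explicit Arena(std::size_t bytes)
        : buf_(static_cast<char*>(std::malloc(bytes))),
          cap_(buf_ ? bytes : 0), used_(0) {}
    ~Arena() { std::free(buf_); }

    // Hand out the next chunk; a real allocator would fall back to
    // malloc (or a new arena) when this returns nullptr.
    void* allocate(std::size_t n) {
        n = (n + 7) & ~std::size_t{7};          // round up to 8-byte alignment
        if (used_ + n > cap_) return nullptr;
        void* p = buf_ + used_;
        used_ += n;
        return p;
    }

    std::size_t used() const { return used_; }

private:
    char* buf_;
    std::size_t cap_;
    std::size_t used_;
};
```

Each `allocate` is just a bounds check and a pointer bump, which is why it can beat a general-purpose malloc for small, frequent requests.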


... as long as you understand what's going on with dynamically loaded code, and don't try to free something that was allocated by a module that uses a totally different allocator.

C++ standard meets modern runtimes. Wheee! :-)




