android dalvik vm alloc (repost)

 
Garbage collection:
To track object usage, the collector must know whether each object in memory is still in use, so it needs a flag per object that records this: the mark bits. One scheme gives every object its own mark bit stored with the object; another keeps the mark bits separate from the objects, in a dedicated memory region. When the mark bits live inside the objects, marking touches far more of the cache; when they are stored separately, they stay densely packed. That is the big picture.
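To make the two layouts concrete, here is a minimal C sketch (my own illustration, not Dalvik code; ObjectWithMarkBit and ExternalMarkBitmap are made-up names):

/* Scheme 1: the mark bit lives in each object's header.  Marking during
 * a GC then writes into every reachable object, dirtying the pages and
 * cache lines the objects themselves live on.
 */
struct ObjectWithMarkBit {
    unsigned int markBit : 1;    /* touched on every collection */
    unsigned int other   : 31;
    /* ... the object's own fields ... */
};

/* Scheme 2: the mark bits sit in a separate, densely packed bitmap,
 * one bit per possible (aligned) object address.  A GC only writes
 * this small side table and leaves the object pages untouched.
 */
struct ExternalMarkBitmap {
    unsigned long *bits;    /* one bit per aligned heap address */
    void *heapBase;         /* the address that bit 0 corresponds to */
};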

On Android we always have to remember that all of these processes share a performance-constrained device. Each process is independent, each has its own heap, and each heap is garbage-collected on its own.

GC and sharing:
The mark bits are stored separately. On Android this is the workable approach because of the zygote process. Zygote has its own heap, and that heap is shared; if the mark bits were scattered throughout the zygote heap, then running a GC would touch zygote pages and make them unshared (dirty), which would hurt the device's memory system. Another benefit of keeping the mark bits separate is that most of the time a process is not collecting garbage, so the mark arrays do not need to be allocated at all except during a collection. That memory can instead be used to run more applications.
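A minimal sketch of the "only allocate the mark array while collecting" idea: grab zero-filled anonymous pages with mmap when a GC starts and give them back when it finishes (illustrative only; allocMarkBits and freeMarkBits are my names, not Dalvik's):

#include <stddef.h>
#include <sys/mman.h>

/* Obtain a zero-initialized mark bitmap just for the duration of a GC. */
static unsigned long *allocMarkBits(size_t bitmapBytes)
{
    void *bits = mmap(NULL, bitmapBytes, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return (bits == MAP_FAILED) ? NULL : (unsigned long *)bits;
}

/* Return the pages to the system once the collection is finished. */
static void freeMarkBits(unsigned long *bits, size_t bitmapBytes)
{
    munmap(bits, bitmapBytes);
}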

That is the overall picture of memory management. Before walking through the memory-management code, let's first review Java's reference objects.

A reference object encapsulates a link to another object, called the referent. All reference objects are instances of subclasses of the abstract class java.lang.ref.Reference. The Reference family, shown in the figure, has three direct subclasses: SoftReference, WeakReference, and PhantomReference. A SoftReference object holds a "soft reference" to its referent, a WeakReference holds a "weak reference", and a PhantomReference holds a "phantom reference". A strong reference prevents its target from being garbage collected; soft, weak, and phantom references do not.

The figure shows a SoftReference object. Once a reference object is created, it maintains its soft, weak, or phantom reference to the referent until it is cleared by the program or by the garbage collector. To clear a reference object, the program or the collector simply calls its clear() method, which severs the soft, weak, or phantom link to the referent.

The garbage collector is free to change the reachability state of any object that is not strongly reachable. If the program is interested in such changes, it can associate a reference object with a reference queue, an instance of java.lang.ref.ReferenceQueue; the collector adds (enqueues) the affected reference objects when it changes their reachability state. As the figure shows, when the collector decides to collect a weakly reachable object, it clears the WeakReference object (calls its clear method) and may enqueue that WeakReference on its reference queue immediately, or at some later time. To put a reference object on its associated queue, the collector calls its enqueue method, which is defined in the superclass Reference; the reference object is enqueued only if a queue was associated with it when it was created, and only the first call to enqueue actually enqueues it. The collector enqueues soft, weak, and phantom reference objects to signal three different reachability-state transitions. In total there are six reachability states; the transitions are described below:
- Strongly reachable: the object can be reached from the roots without going through any reference object. An object begins life strongly reachable, and it stays strongly reachable as long as a root or another strongly reachable object refers to it. The collector never tries to reclaim the memory of a strongly reachable object.
- Softly reachable: the object is not strongly reachable, but can be reached from the roots through one or more (uncleared) soft reference objects. The collector may reclaim the memory of a softly reachable object; if it does, it clears all soft references to that object. When the collector clears a soft reference object that is associated with a reference queue, it enqueues it.
- Weakly reachable: the object is neither strongly nor softly reachable, but can be reached from the roots through one or more (uncleared) weak reference objects. The collector must reclaim the memory of a weakly reachable object. When that happens, it clears all weak references to the object; when it clears a weak reference object that is associated with a reference queue, it enqueues it.
- Resurrectable: the object is not strongly, softly, or weakly reachable, but may still be brought back to one of those states by some finalizer.
- Phantom reachable: the object is not strongly, softly, or weakly reachable, has been determined not to be resurrectable by any finalizer (if it defines a finalize method, that method has already run), and can be reached from the roots through one or more (uncleared) phantom reference objects. As soon as a phantom-referenced object becomes phantom reachable, the collector enqueues the reference object. The collector never clears a phantom reference; all phantom references must be cleared explicitly by the program.
- Unreachable: the object is not strongly, softly, weakly, or phantom reachable, and it is not resurrectable. Unreachable objects are ready to be reclaimed.
Note the asymmetry: the collector enqueues soft and weak reference objects when their referents leave the corresponding reachability state, but it enqueues phantom reference objects when their referents enter the phantom-reachable state.
Let's look at two code segments. They are the best examples of how reference objects are used during garbage collection.

Code segment 1

Code segment 2

Now let's dive into the virtual machine's memory-allocation code.

First, the data structures related to memory management.
The heap data structure:
typedef struct {
    mspace *msp;             // the dlmalloc mspace that allocations and frees operate on
    HeapBitmap objectBitmap; // live-object bitmap: a set bit marks an object allocated from this heap
    size_t absoluteMaxSize;  // maximum size this heap may grow to
    size_t bytesAllocated;   // number of bytes currently allocated
    size_t objectsAllocated; // number of objects currently allocated
} Heap;

struct HeapSource {
    size_t targetUtilization;   /* target ideal heap utilization, in the range 1..HEAP_UTILIZATION_MAX */
    size_t minimumSize;         // minimum size of the heap
    size_t startSize;           // initial size of the heap
    size_t absoluteMaxSize;     // maximum size the heap is allowed to grow to
    size_t idealSize;           // ideal maximum size of the heap
    size_t softLimit;           // maximum number of bytes the heap may allocate before a GC is triggered
    Heap heaps[HEAP_SOURCE_MAX_HEAP_COUNT]; // the actual heaps
    size_t numHeaps;            // current number of heaps
    size_t externalBytesAllocated; // bytes allocated externally, i.e. not from these heaps
    size_t externalLimit;       // maximum number of bytes that may be allocated externally
    bool sawZygote;             /* whether this HeapSource was created in zygote mode, i.e. while the zygote process exists */
};
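To make the role of softLimit (and idealSize) concrete, here is a hedged sketch of how an allocator can use a soft limit to force a collection before letting the heap grow. The helpers tryAllocFromMspace, runGc, and growHeapTowardIdeal are hypothetical stand-ins, not the actual Dalvik functions:

/* Sketch: allocate n bytes, garbage-collecting before exceeding softLimit. */
static void *allocWithSoftLimit(struct HeapSource *hs, size_t n)
{
    /* Hypothetical helpers standing in for the real allocation path. */
    extern void *tryAllocFromMspace(struct HeapSource *hs, size_t n);
    extern void runGc(void);
    extern void growHeapTowardIdeal(struct HeapSource *hs);

    if (hs->heaps[0].bytesAllocated + n <= hs->softLimit) {
        void *fast = tryAllocFromMspace(hs, n);
        if (fast != NULL)
            return fast;
    }
    /* Over the soft limit (or the mspace itself was full): collect first. */
    runGc();
    void *ptr = tryAllocFromMspace(hs, n);
    if (ptr == NULL) {
        /* Still failing: let the heap grow toward idealSize/absoluteMaxSize. */
        growHeapTowardIdeal(hs);
        ptr = tryAllocFromMspace(hs, n);
    }
    return ptr;
}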

typedef struct {
    unsigned long int *bits;/* anonymous, zero-initialized memory region allocated with mmap */

    size_t bitsLen;         // size of the bitmap, in bytes
    uintptr_t base;         // lowest address covered by the bitmap (the address bit 0 corresponds to)
    uintptr_t max;          /* highest address whose bit is currently set; if no bit is set, max < base */
} HeapBitmap;
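The mapping from an object address to a bit in such a bitmap can be pictured like this (a sketch assuming 8-byte object alignment and long-sized bitmap words; the names below are mine, not the real HeapBitmap macros):

#include <limits.h>   /* CHAR_BIT */
#include <stdint.h>

#define GC_ALIGNMENT   8    /* assumed object alignment */
#define BITS_PER_WORD  (sizeof(unsigned long) * CHAR_BIT)

/* Which bit of hb->bits corresponds to the object at addr? */
static inline size_t bitIndexFor(const HeapBitmap *hb, uintptr_t addr)
{
    return (addr - hb->base) / GC_ALIGNMENT;
}

static inline void setObjectBit(HeapBitmap *hb, uintptr_t addr)
{
    size_t i = bitIndexFor(hb, addr);
    hb->bits[i / BITS_PER_WORD] |= 1UL << (i % BITS_PER_WORD);
    if (addr > hb->max)
        hb->max = addr;     /* remember the highest address whose bit is set */
}

static inline int isObjectBitSet(const HeapBitmap *hb, uintptr_t addr)
{
    size_t i = bitIndexFor(hb, addr);
    return (hb->bits[i / BITS_PER_WORD] >> (i % BITS_PER_WORD)) & 1UL;
}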

typedef struct {
    /* a stack that grows downward */
    const Object **limit;   // lowest address the stack may grow down to
    const Object **top;     // current top of the stack
    const Object **base;    // bottom of the stack
} GcMarkStack;

typedef struct {
    HeapBitmap bitmaps[HEAP_SOURCE_MAX_HEAP_COUNT]; // array of mark bitmaps
    size_t numBitmaps;      // number of bitmaps
    GcMarkStack stack;      // the GC mark stack
    const void *finger;     /* only used while scanning/recursing; an address marker recording how far the bitmap walk has progressed */
} GcMarkContext;

struct GcHeap {
    HeapSource *heapSource;     // the heap source: holds all heaps and all heap-related bookkeeping
    HeapRefTable nonCollectableRefs;    // reference table of objects that must not be garbage collected
    LargeHeapRefTable *finalizableRefs; /* reference table of objects whose finalize() must be run when they are collected */
    LargeHeapRefTable *pendingFinalizationRefs; /* reference table of objects whose finalize() is pending; I have not found where the code adds objects to this table */
    Object *softReferences;     // list of soft reference objects
    Object *weakReferences;     // list of weak reference objects
    Object *phantomReferences;  // list of phantom reference objects
    LargeHeapRefTable *referenceOperations; /* list of reference objects on which clear() or enqueue() still has to be run */
    Object *heapWorkerCurrentObject;
    Method *heapWorkerCurrentMethod;    /* if these two are non-NULL, the HeapWorker thread is currently executing this method on this object */
    u8 heapWorkerInterpStartTime;   /* if heapWorkerCurrentObject is non-NULL, the time at which HeapWorker started executing the method */
    u8 heapWorkerInterpCpuStartTime;    /* if heapWorkerCurrentObject is non-NULL, the CPU time at which HeapWorker started executing the method */
    struct timespec heapWorkerNextTrim; /* time of the next scheduled Heap Source trim */
    GcMarkContext markContext;  // state of the mark step
    u8 gcStartTime;             // time the GC started
    bool gcRunning;             // whether a GC is currently running
    enum { SR_COLLECT_NONE, SR_COLLECT_SOME, SR_COLLECT_ALL } softReferenceCollectionState;    // how many soft references to collect during this GC: none, some, or all of them
    size_t softReferenceHeapSizeThreshold;  /* heap size at which the collector starts collecting soft references */
    int softReferenceColor;     /* probability value used when the soft-reference policy is to collect only some of them */
    bool markAllReferents;      /* if true, objects referenced by soft/weak/phantom references are marked as well; if false, the normal reference-collection policy applies */
    /* The remaining fields are statistics/debugging/tracing variables; we will not discuss them. */
#if DVM_TRACK_HEAP_MARKING
    size_t markCount;
    size_t markSize;
#endif
    int ddmHpifWhen;
    int ddmHpsgWhen;
    int ddmHpsgWhat;
    int ddmNhsgWhen;
    int ddmNhsgWhat;
#if WITH_HPROF
    bool hprofDumpOnGc;
    hprof_context_t *hprofContext;
#endif
};

The figure shows how the Dalvik virtual machine's memory-management data structures relate to one another. Why are there at most three heaps? I believe it is because one heap is created at initialization; if there is a zygote process, a second heap is created; and when a new process is forked from zygote, a third heap is created. From then on the process only ever operates on that last heap, the so-called current (active) heap, and no further heaps are created.

Let's start with the HeapWorker thread. Its main job is to run object finalizers and to clear and enqueue reference objects. Here is its code.
static void doHeapWork(Thread *self)
{
    Object *obj;
    HeapWorkerOperation op;
    int numFinalizersCalled, numReferencesEnqueued;
    numFinalizersCalled = 0;
    numReferencesEnqueued = 0;
    while ((obj = dvmGetNextHeapWorkerObject(&op)) != NULL) {
        Method *method = NULL;
        /* Make sure the object hasn't been collected since
         * being scheduled.
         */
        /* Call the appropriate method(s).
         */
        if (op == WORKER_FINALIZE) {
            numFinalizersCalled++;
            method = obj->clazz->vtable[gDvm.voffJavaLangObject_finalize];
            callMethod(self, obj, method);
        } else {
            if (op & WORKER_ENQUEUE) {
                numReferencesEnqueued++;
                callMethod(self, obj,
                        gDvm.methJavaLangRefReference_enqueueInternal);
            }
        }
        /* Let the GC collect the object.
         */
        dvmReleaseTrackedAlloc(obj, self);
    }
}

The main function of the HeapWorker thread:
static void* heapWorkerThreadStart(void* arg)
{
    Thread *self = dvmThreadSelf();
    int cc;
    /* tell the main thread that we're ready */
    dvmLockMutex(&gDvm.heapWorkerLock);
    gDvm.heapWorkerReady = true;
    cc = pthread_cond_signal(&gDvm.heapWorkerCond);
    dvmUnlockMutex(&gDvm.heapWorkerLock);
    dvmLockMutex(&gDvm.heapWorkerLock);
    while (!gDvm.haltHeapWorker) {
        struct timespec trimtime;
        bool timedwait = false;
        /* We're done running interpreted code for now. */
        dvmChangeStatus(NULL, THREAD_VMWAIT);
        /* Signal anyone who wants to know when we're done. */
        cc = pthread_cond_broadcast(&gDvm.heapWorkerIdleCond);
        /* Trim the heap if we were asked to. */
        trimtime = gDvm.gcHeap->heapWorkerNextTrim;
        if (trimtime.tv_sec != 0 && trimtime.tv_nsec != 0) {
            struct timeval now;
            gettimeofday(&now, NULL);
            if (trimtime.tv_sec < now.tv_sec ||
                (trimtime.tv_sec == now.tv_sec &&
                 trimtime.tv_nsec <= now.tv_usec * 1000))
            {
                size_t madvisedSizes[HEAP_SOURCE_MAX_HEAP_COUNT];
                /* The heap must be locked before the HeapWorker;
                 * unroll and re-order the locks.  dvmLockHeap()
                 * will put us in VMWAIT if necessary.  Once it
                 * returns, there shouldn't be any contention on
                 * heapWorkerLock.
                 */
                dvmUnlockMutex(&gDvm.heapWorkerLock);
                dvmLockHeap();
                dvmLockMutex(&gDvm.heapWorkerLock);
                memset(madvisedSizes, 0, sizeof(madvisedSizes));
                dvmHeapSourceTrim(madvisedSizes, HEAP_SOURCE_MAX_HEAP_COUNT);
                dvmLogMadviseStats(madvisedSizes, HEAP_SOURCE_MAX_HEAP_COUNT);
                dvmUnlockHeap();
                trimtime.tv_sec = 0;
                trimtime.tv_nsec = 0;
                gDvm.gcHeap->heapWorkerNextTrim = trimtime;
            } else {
                timedwait = true;
            }
        }
        /* sleep until signaled */
        if (timedwait) {
            cc = pthread_cond_timedwait(&gDvm.heapWorkerCond,
                    &gDvm.heapWorkerLock, &trimtime);
        } else {
            cc = pthread_cond_wait(&gDvm.heapWorkerCond, &gDvm.heapWorkerLock);
        }
        /* dvmChangeStatus() may block;  don't hold heapWorkerLock.
         */
        dvmUnlockMutex(&gDvm.heapWorkerLock);
        dvmChangeStatus(NULL, THREAD_RUNNING);
        dvmLockMutex(&gDvm.heapWorkerLock);
        /* Process any events in the queue.
         */
        doHeapWork(self);
    }
    dvmUnlockMutex(&gDvm.heapWorkerLock);
    return NULL;
}

Below is the main garbage-collection function.
void dvmCollectGarbageInternal(bool collectSoftReferences)
{
    GcHeap *gcHeap = gDvm.gcHeap;
    Object *softReferences;
    Object *weakReferences;
    Object *phantomReferences;
    u8 now;
    s8 timeSinceLastGc;
    s8 gcElapsedTime;
    int numFreed;
    size_t sizeFreed;
    /* The heap lock must be held.
     */
    // first, a few local variables are initialized

    if (gcHeap->gcRunning) {
        return;
    }   // if a GC is already in progress, just return
    // ...... record the start time
    dvmSuspendAllThreads(SUSPEND_FOR_GC);   // suspend all threads
    /* Get the priority (the "nice" value) of the current thread.  The
     * getpriority() call can legitimately return -1, so we have to
     * explicitly test errno.
     */
    // ...... get the current thread's priority
    oldThreadPriority = priorityResult;
    dvmLockMutex(&gDvm.heapWorkerLock);
    /* Make sure that the HeapWorker thread hasn't become
     * wedged inside interp code.  If it has, this call will
     * print a message and abort the VM.
     */
dvmAssertHeapWorkerThreadRunning();

Let's pause here and look at the code of dvmAssertHeapWorkerThreadRunning.
/* Make sure that the HeapWorker thread hasn't spent an inordinate
 * amount of time inside an interpreted finalizer.
 *
 * Aborts the VM if the thread appears to be wedged.
 *
 * The caller must hold the heapWorkerLock to guarantee an atomic
 * read of the watchdog values.
 */
void dvmAssertHeapWorkerThreadRunning()
{
    if (gDvm.gcHeap->heapWorkerCurrentObject != NULL) {
        static const u8 HEAP_WORKER_WATCHDOG_TIMEOUT = 10*1000*1000LL; // 10sec

        u8 heapWorkerInterpStartTime = gDvm.gcHeap->heapWorkerInterpStartTime;
        u8 now = dvmGetRelativeTimeUsec();
        u8 delta = now - heapWorkerInterpStartTime;

        u8 heapWorkerInterpCpuStartTime =
            gDvm.gcHeap->heapWorkerInterpCpuStartTime;
        u8 nowCpu = dvmGetOtherThreadCpuTimeUsec(gDvm.heapWorkerHandle);
        u8 deltaCpu = nowCpu - heapWorkerInterpCpuStartTime;

        if (delta > HEAP_WORKER_WATCHDOG_TIMEOUT && gDvm.debuggerActive) {
            /*
             * Debugger suspension can block the thread indefinitely.  For
             * best results we should reset this explicitly whenever the
             * HeapWorker thread is resumed.  Ignoring the yelp isn't
             * quite right but will do for a quick fix.
             */
            LOGI("Debugger is attached -- suppressing HeapWorker watchdog\n");
            heapWorkerInterpStartTime = now;        /* reset timer */
        } else if (delta > HEAP_WORKER_WATCHDOG_TIMEOUT) {
            char* desc = dexProtoCopyMethodDescriptor(
                    &gDvm.gcHeap->heapWorkerCurrentMethod->prototype);
            LOGE("HeapWorker is wedged: %lldms spent inside %s.%s%s\n",
                    delta / 1000,
                    gDvm.gcHeap->heapWorkerCurrentObject->clazz->descriptor,
                    gDvm.gcHeap->heapWorkerCurrentMethod->name, desc);
            free(desc);
            dvmDumpAllThreads(true);

            /* abort the VM */
            dvmAbort();
        } else if (delta > HEAP_WORKER_WATCHDOG_TIMEOUT / 2) {
            char* desc = dexProtoCopyMethodDescriptor(
                    &gDvm.gcHeap->heapWorkerCurrentMethod->prototype);
            LOGW("HeapWorker may be wedged: %lldms spent inside %s.%s%s\n",
                    delta / 1000,
                    gDvm.gcHeap->heapWorkerCurrentObject->clazz->descriptor,
                    gDvm.gcHeap->heapWorkerCurrentMethod->name, desc);
            free(desc);
        }
    }
}

Continuing with dvmCollectGarbageInternal:
    /* Lock the pendingFinalizationRefs list.
     *
     * Acquire the lock after suspending so the finalizer
     * thread can't block in the RUNNING state while
     * we try to suspend.
     */
    dvmLockMutex(&gDvm.heapWorkerListLock);
    /* Set up the marking context.
     */
    dvmHeapBeginMarkStep();
    /* Mark the set of objects that are strongly reachable from the roots.
     */
    dvmHeapMarkRootSet();
    /* dvmHeapScanMarkedObjects() will build the lists of known
     * instances of the Reference classes.
     */
    gcHeap->softReferences = NULL;
    gcHeap->weakReferences = NULL;
    gcHeap->phantomReferences = NULL;
    /* Make sure that we don't hard-mark the referents of Reference
     * objects by default.
     */
    gcHeap->markAllReferents = false;
    /* Don't mark SoftReferences if our caller wants us to collect them.
     * This has to be set before calling dvmHeapScanMarkedObjects().
     */
    if (collectSoftReferences) {
        gcHeap->softReferenceCollectionState = SR_COLLECT_ALL;
    }
    /* Recursively mark any objects that marked objects point to strongly.
     * If we're not collecting soft references, soft-reachable
     * objects will also be marked.
     */
dvmHeapScanMarkedObjects();

Let's look at the code of dvmHeapScanMarkedObjects here.
void dvmHeapScanMarkedObjects()
{
    GcMarkContext *ctx = &gDvm.gcHeap->markContext;
    /* The bitmaps currently have bits set for the root set.
     * Walk across the bitmaps and scan each object.
     */
#ifndef NDEBUG
    gLastFinger = 0;
#endif
    dvmHeapBitmapWalkList(ctx->bitmaps, ctx->numBitmaps,
            scanBitmapCallback, ctx);
    /* We've walked the mark bitmaps.  Scan anything that's
     * left on the mark stack.
     */
    processMarkStack(ctx);
}

And here is the code of processMarkStack.
static void processMarkStack(GcMarkContext *ctx)
{
    const Object **const base = ctx->stack.base;
    /* Scan anything that's on the mark stack.
     * We can't use the bitmaps anymore, so use
     * a finger that points past the end of them.
     */
    ctx->finger = (void *)ULONG_MAX;
    while (ctx->stack.top != base) {
        scanObject(*ctx->stack.top++, ctx);
    }
}
scanObject is a complex function that in turn calls several even more complex ones; it is too long to list here. The key point is that it also operates on the stack that ctx points to, pushing newly found objects onto the top of the stack, so that when control returns to this loop, ctx->stack.top is once again at the top of the stack. The loop therefore effectively performs a depth-first search and ends up marking every strongly reachable object.
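A hedged sketch of the marking step that scanObject performs for each reference it finds (markObject and setMarkBit below are my simplifications, not the exact Dalvik code): if the object is not yet marked, set its bit and, when the bitmap walk has already passed its address (always the case once finger is ULONG_MAX), push it onto the mark stack so that this loop scans it later.

/* Simplified marking step used while scanning (sketch only). */
static void markObject(const Object *obj, GcMarkContext *ctx)
{
    /* setMarkBit() stands in for setting the object's bit in the mark
     * bitmaps; it returns whether the bit was already set. */
    extern int setMarkBit(const Object *obj, GcMarkContext *ctx);

    if (obj == NULL)
        return;
    if (!setMarkBit(obj, ctx)) {
        /* Newly marked.  If the bitmap walk has already gone past this
         * address it will never visit the object, so push it onto the
         * mark stack; processMarkStack() will scan it in a later
         * iteration of its loop. */
        if ((const void *)obj < ctx->finger) {
            *--ctx->stack.top = obj;    /* the stack grows downward */
        }
    }
}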

Continuing:
    /* Latch these so that the other calls to dvmHeapScanMarkedObjects() don't
     * mess with them.
     */
    softReferences = gcHeap->softReferences;
    weakReferences = gcHeap->weakReferences;
    phantomReferences = gcHeap->phantomReferences;
    /* All strongly-reachable objects have now been marked.
     */
    if (gcHeap->softReferenceCollectionState != SR_COLLECT_NONE) {
        dvmHeapHandleReferences(softReferences, REF_SOFT);
        // markCount always zero
        /* Now that we've tried collecting SoftReferences,
         * fall back to not collecting them.  If the heap
         * grows, we will start collecting again.
         */
        gcHeap->softReferenceCollectionState = SR_COLLECT_NONE;
    } // else dvmHeapScanMarkedObjects() already marked the soft-reachable set
    dvmHeapHandleReferences(weakReferences, REF_WEAK);
    // markCount always zero
    /* Once all weak-reachable objects have been taken
     * care of, any remaining unmarked objects can be finalized.
     */
    dvmHeapScheduleFinalizations();
    /* Any remaining objects that are not pending finalization
     * could be phantom-reachable.  This will mark any phantom-reachable
     * objects, as well as enqueue their references.
     */
dvmHeapHandleReferences(phantomReferences, REF_PHANTOM);
The code above in fact calls processMarkStack several more times (through dvmHeapHandleReferences), so the strongly reachable objects associated with the reference objects get marked as well.
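What dvmHeapHandleReferences does can be pictured roughly like this (a conceptual sketch in my own words, not the real function; isMarked, getReferent, nextReference, clearReferent, and scheduleEnqueue are hypothetical helpers): walk the list of reference objects discovered during marking, keep alive the referents that should survive, clear and schedule enqueueing for the rest, and then drain the mark stack again so anything newly marked gets scanned.

/* Conceptual sketch of handling one list of reference objects. */
static void handleReferenceList(Object *refList, bool collectReferents,
                                GcMarkContext *ctx)
{
    /* Hypothetical helpers. */
    extern int isMarked(const Object *obj, const GcMarkContext *ctx);
    extern Object *getReferent(Object *reference);
    extern Object *nextReference(Object *reference);
    extern void markObject(const Object *obj, GcMarkContext *ctx);
    extern void clearReferent(Object *reference);
    extern void scheduleEnqueue(Object *reference);

    Object *ref;
    for (ref = refList; ref != NULL; ref = nextReference(ref)) {
        Object *referent = getReferent(ref);
        if (referent == NULL || isMarked(referent, ctx))
            continue;               /* already cleared, or still reachable */
        if (!collectReferents) {
            /* e.g. soft references when we are not collecting them:
             * keep the referent alive. */
            markObject(referent, ctx);
        } else {
            clearReferent(ref);     /* sever the link to the referent...     */
            scheduleEnqueue(ref);   /* ...and let the HeapWorker thread
                                       enqueue the reference object later   */
        }
    }
    processMarkStack(ctx);          /* scan anything that was newly marked */
}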

    dvmHeapSweepUnmarkedObjects(&numFreed, &sizeFreed);
    dvmHeapFinishMarkStep();
     /* Now's a good time to adjust the heap size, since
     * we know what our utilization is.
     *
     * This doesn't actually resize any memory;
     * it just lets the heap grow more when necessary.
     */
    dvmHeapSourceGrowForUtilization();
    dvmHeapSizeChanged();
    /* Now that we've freed up the GC heap, return any large
     * free chunks back to the system.  They'll get paged back
     * in the next time they're used.  Don't do it immediately,
     * though;  if the process is still allocating a bunch of
     * memory, we'll be taking a ton of page faults that we don't
     * necessarily need to.
     *
     * Cancel any old scheduled trims, and schedule a new one.
     */
    dvmScheduleHeapSourceTrim(5);  // in seconds
    gcHeap->gcRunning = false;
    dvmUnlockMutex(&gDvm.heapWorkerListLock);
    dvmUnlockMutex(&gDvm.heapWorkerLock);
    dvmResumeAllThreads(SUSPEND_FOR_GC);
    //......其它一些事情
}

dvmHeapSweepUnmarkedObjects is another hugely complex function; the function it eventually calls to free an object from the heap is dvmHeapSourceFree, which is worth a look.
void dvmHeapSourceFree(void *ptr)
{
    Heap *heap;
    HS_BOILERPLATE();
    heap = ptr2heap(gHs, ptr);
    if (heap != NULL) {
        countFree(heap, ptr, true);
        /* Only free objects that are in the active heap.
         * Touching old heaps would pull pages into this process.
         */
        if (heap == gHs->heaps) {
            mspace_free(heap->msp, ptr);
        }
    }
}
The mspace_free call is where heap space is actually released and consolidated, but that belongs to dlmalloc and is no longer part of the virtual machine's own memory management.
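For completeness, the sweep itself can be pictured as walking the bitmaps and freeing every object whose bit is set in the allocation (live-object) bitmap but not in the mark bitmap. The sketch below is conceptual, not the actual dvmHeapSweepUnmarkedObjects; it reuses the isObjectBitSet helper sketched earlier together with a made-up objectSize:

/* Conceptual sweep: free everything that was allocated but never marked. */
static void sweepUnmarked(const HeapBitmap *objectBitmap,
                          const HeapBitmap *markBitmap,
                          size_t *numFreed, size_t *sizeFreed)
{
    extern size_t objectSize(const Object *obj);   /* hypothetical */
    uintptr_t addr;

    *numFreed = 0;
    *sizeFreed = 0;
    for (addr = objectBitmap->base; addr <= objectBitmap->max;
         addr += 8 /* assumed object alignment */) {
        if (isObjectBitSet(objectBitmap, addr) &&
            !isObjectBitSet(markBitmap, addr)) {
            Object *obj = (Object *)addr;
            *numFreed += 1;
            *sizeFreed += objectSize(obj);
            dvmHeapSourceFree(obj);     /* the function shown above */
        }
    }
}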

The Dalvik virtual machine's initialization code contains the GC initialization.
int dvmStartup(int argc, const char* const argv[], bool ignoreUnrecognized,
    JNIEnv* pEnv)
{
    //......
    if (!dvmGcStartup())
        goto fail;
    //......
}

bool dvmGcStartup(void)
{
    dvmInitMutex(&gDvm.gcHeapLock);
    return dvmHeapStartup();
}

bool dvmHeapStartup()
{
    GcHeap *gcHeap;
#if defined(WITH_ALLOC_LIMITS)
    gDvm.checkAllocLimits = false;
    gDvm.allocationLimit = -1;
#endif
    gcHeap = dvmHeapSourceStartup(gDvm.heapSizeStart, gDvm.heapSizeMax);
    if (gcHeap == NULL) {
        return false;
    }
    gcHeap->heapWorkerCurrentObject = NULL;
    gcHeap->heapWorkerCurrentMethod = NULL;
    gcHeap->heapWorkerInterpStartTime = 0LL;
    gcHeap->softReferenceCollectionState = SR_COLLECT_NONE;
    gcHeap->softReferenceHeapSizeThreshold = gDvm.heapSizeStart;
    gcHeap->ddmHpifWhen = 0;
    gcHeap->ddmHpsgWhen = 0;
    gcHeap->ddmHpsgWhat = 0;
    gcHeap->ddmNhsgWhen = 0;
    gcHeap->ddmNhsgWhat = 0;
#if WITH_HPROF
    gcHeap->hprofDumpOnGc = false;
    gcHeap->hprofContext = NULL;
#endif
    /* This needs to be set before we call dvmHeapInitHeapRefTable().
     */
    gDvm.gcHeap = gcHeap;
    /* Set up the table we'll use for ALLOC_NO_GC.
     */
    if (!dvmHeapInitHeapRefTable(&gcHeap->nonCollectableRefs,
                           kNonCollectableRefDefault))
    {
        LOGE_HEAP("Can't allocate GC_NO_ALLOC table\n");
        goto fail;
    }
    /* Set up the lists and lock we'll use for finalizable
     * and reference objects.
     */
    dvmInitMutex(&gDvm.heapWorkerListLock);
    gcHeap->finalizableRefs = NULL;
    gcHeap->pendingFinalizationRefs = NULL;
    gcHeap->referenceOperations = NULL;
    /* Initialize the HeapWorker locks and other state
     * that the GC uses.
     */
    dvmInitializeHeapWorkerState();
    return true;
fail:
    gDvm.gcHeap = NULL;
    dvmHeapSourceShutdown(gcHeap);
    return false;
}

GcHeap *
dvmHeapSourceStartup(size_t startSize, size_t absoluteMaxSize)
{
    GcHeap *gcHeap;
    HeapSource *hs;
    Heap *heap;
    mspace msp;
    assert(gHs == NULL);
    if (startSize > absoluteMaxSize) {
        LOGE("Bad heap parameters (start=%d, max=%d)\n",
           startSize, absoluteMaxSize);
        return NULL;
    }
    /* Create an unlocked dlmalloc mspace to use as
     * the small object heap source.
     */
    msp = createMspace(startSize, absoluteMaxSize, 0);
    if (msp == NULL) {
        return false;
    }
    /* Allocate a descriptor from the heap we just created.
     */
    gcHeap = mspace_malloc(msp, sizeof(*gcHeap));
    if (gcHeap == NULL) {
        LOGE_HEAP("Can't allocate heap descriptor\n");
        goto fail;
    }
    memset(gcHeap, 0, sizeof(*gcHeap));

    hs = mspace_malloc(msp, sizeof(*hs));
    if (hs == NULL) {
        LOGE_HEAP("Can't allocate heap source\n");
        goto fail;
    }
    memset(hs, 0, sizeof(*hs));
    hs->targetUtilization = DEFAULT_HEAP_UTILIZATION;
    hs->minimumSize = 0;
    hs->startSize = startSize;
    hs->absoluteMaxSize = absoluteMaxSize;
    hs->idealSize = startSize;
    hs->softLimit = INT_MAX;    // no soft limit at first
    hs->numHeaps = 0;
    hs->sawZygote = gDvm.zygote;
    if (!addNewHeap(hs, msp, absoluteMaxSize)) {
        LOGE_HEAP("Can't add initial heap\n");
        goto fail;
    }
    gcHeap->heapSource = hs;
    countAllocation(hs2heap(hs), gcHeap, false);
    countAllocation(hs2heap(hs), hs, false);
    gHs = hs;
    return gcHeap;
fail:
    destroy_contiguous_mspace(msp);
    return NULL;
}

Once this setup is complete, the memory layout is as shown in the figure.

Now let's look at the shutdown path.
void dvmGcShutdown(void)
{
    //TODO: grab and destroy the lock
    dvmHeapShutdown();
}

void dvmHeapShutdown()
{
//TODO: make sure we're locked
    if (gDvm.gcHeap != NULL) {
        GcHeap *gcHeap;
        gcHeap = gDvm.gcHeap;
        gDvm.gcHeap = NULL;
        /* Tables are allocated on the native heap;
         * they need to be cleaned up explicitly.
         * The process may stick around, so we don't
         * want to leak any native memory.
         */
        dvmHeapFreeHeapRefTable(&gcHeap->nonCollectableRefs);
        dvmHeapFreeLargeTable(gcHeap->finalizableRefs);
        gcHeap->finalizableRefs = NULL;
        dvmHeapFreeLargeTable(gcHeap->pendingFinalizationRefs);
        gcHeap->pendingFinalizationRefs = NULL;
        dvmHeapFreeLargeTable(gcHeap->referenceOperations);
        gcHeap->referenceOperations = NULL;
        /* Destroy the heap.  Any outstanding pointers
         * will point to unmapped memory (unless/until
         * someone else maps it).  This frees gcHeap
         * as a side-effect.
         */
        dvmHeapSourceShutdown(gcHeap);
    }
}

void dvmHeapSourceShutdown(GcHeap *gcHeap)
{
    if (gcHeap != NULL && gcHeap->heapSource != NULL) {
        HeapSource *hs;
        size_t numHeaps;
        size_t i;
        hs = gcHeap->heapSource;
        gHs = NULL;
        /* Cache numHeaps because hs will be invalid after the last
         * heap is freed.
         */
        numHeaps = hs->numHeaps;
        for (i = 0; i < numHeaps; i++) {
            Heap *heap = &hs->heaps[i];
            dvmHeapBitmapDelete(&heap->objectBitmap);
            destroy_contiguous_mspace(heap->msp);
        }
        /* The last heap is the original one, which contains the
         * HeapSource object itself.
         */
    }
}