[Gc] list of typos

Ondřej Bílka neleai at seznam.cz
Wed Jul 10 01:53:35 PDT 2013


Hi, 

One of the things stylepp could watch for is typos in comments.

It is more effective to deal with them in bulk than with ten commits each
fixing one typo at a time, so here are the typos I am relatively sure of.

The current usage is: first create a list of likely typos, then correct
them with aspell, and then generate a patch from these corrections.

The list that I generated follows (save it to a dictionary file):

         addressible addressable
            alloctor allocator
               amiga Amiga
         descendents descendants
       effectiviness effectiveness
        errorneously erroneously
          eventhough even_though
        exponentialy exponentially
         collectable collectible
          happenning happening
      idiosyncracies idiosyncrasies
      initialisation initialization
            largerly largely
           necesarry necessary
             occured occurred
        optimisation optimization
            opyright copyright
             oveflow overflow
           reveresed reversed
          rpresented represented
            spesific specific
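
For reference, applying the dictionary above mechanically boils down to
something like the sketch below. This is only an illustration, not the
actual stylepp implementation: it assumes GNU grep/sed, treats '_' in the
correction column as a space (as in even_though), and, unlike the
fix_comment pass, does not limit the replacements to comments.

  #!/bin/bash
  # Hypothetical sketch -- not how stylepp_skeleton actually works.
  # Read "typo correction" pairs from ./dictionary and apply them as
  # whole-word replacements across the C/C++ sources in the tree.
  while read -r typo fix; do
      [ -z "$typo" ] && continue      # skip blank lines
      fix=${fix//_/ }                 # '_' in the dictionary stands for a space
      grep -rlw --include='*.c' --include='*.h' --include='*.cc' "$typo" . |
          xargs -r sed -i "s/\b$typo\b/$fix/g"
  done < dictionary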

Then, when I ran
stylepp/stylepp_skeleton fix_comment
it produced the following patch:


diff --git a/blacklst.c b/blacklst.c
index f12701e..158b3b3 100644
--- a/blacklst.c
+++ b/blacklst.c
@@ -21,7 +21,7 @@
  * See the definition of page_hash_table in gc_private.h.
  * False hits from the stack(s) are much more dangerous than false hits
  * from elsewhere, since the former can pin a large object that spans the
- * block, eventhough it does not start on the dangerous block.
+ * block, even though it does not start on the dangerous block.
  */
 
 /*
diff --git a/cord/cordbscs.c b/cord/cordbscs.c
index 5685d90..f486b7e 100644
--- a/cord/cordbscs.c
+++ b/cord/cordbscs.c
@@ -570,7 +570,7 @@ int CORD_riter(CORD x, CORD_iter_fn f1, void * client_data)
  * The following functions are concerned with balancing cords.
  * Strategy:
  * Scan the cord from left to right, keeping the cord scanned so far
- * as a forest of balanced trees of exponentialy decreasing length.
+ * as a forest of balanced trees of exponentially decreasing length.
  * When a new subtree needs to be added to the forest, we concatenate all
  * shorter ones to the new tree in the appropriate order, and then insert
  * the result into the forest.
diff --git a/cord/cordxtra.c b/cord/cordxtra.c
index 533ae1b..06b47b5 100644
--- a/cord/cordxtra.c
+++ b/cord/cordxtra.c
@@ -461,7 +461,7 @@ CORD CORD_from_file_eager(FILE * f)
 	c = getc(f);
 	if (c == 0) {
 	  /* Append the right number of NULs    */
-	  /* Note that any string of NULs is rpresented in 4 words, */
+	  /* Note that any string of NULs is represented in 4 words, */
 	  /* independent of its length.                 */
 	    register size_t count = 1;
 
diff --git a/dyn_load.c b/dyn_load.c
index 4ff486a..41a6ff8 100644
--- a/dyn_load.c
+++ b/dyn_load.c
@@ -1116,7 +1116,7 @@ GC_INNER void GC_register_dynamic_libraries(void)
       /* Get info about next shared library */
 	status = shl_get(index, &shl_desc);
 
-      /* Check if this is the end of the list or if some error occured */
+      /* Check if this is the end of the list or if some error occurred */
 	if (status != 0) {
 #        ifdef GC_HPUX_THREADS
 	   /* I've seen errno values of 0.  The man page is not clear   */
diff --git a/extra/AmigaOS.c b/extra/AmigaOS.c
index 2749dd9..b4b21d1 100644
--- a/extra/AmigaOS.c
+++ b/extra/AmigaOS.c
@@ -233,7 +233,7 @@ void *(*GC_amiga_allocwrapper_do)(size_t size,void *(*AllocFunction)(size_t size
    Amiga-spesific routines to obtain memory, and force GC to give
    back fast-mem whenever possible.
 	These hacks makes gc-programs go many times faster when
-   the amiga is low on memory, and are therefore strictly necesarry.
+   the Amiga is low on memory, and are therefore strictly necessary.
 
    -Kjetil S. Matheussen, 2000.
 ******************************************************************/
@@ -437,7 +437,7 @@ void *GC_amiga_rec_alloc(size_t size,void *(*AllocFunction)(size_t size2),const
 void *GC_amiga_allocwrapper_any(size_t size,void *(*AllocFunction)(size_t size2)){
 	void *ret,*ret2;
 
-	GC_amiga_dontalloc=TRUE;	// Pretty tough thing to do, but its indeed necesarry.
+	GC_amiga_dontalloc=TRUE;	// Pretty tough thing to do, but its indeed necessary.
 	latestsize=size;
 
 	ret=(*AllocFunction)(size);
@@ -471,7 +471,7 @@ void *GC_amiga_allocwrapper_any(size_t size,void *(*AllocFunction)(size_t size2)
 #ifdef GC_AMIGA_RETRY
 		else{
 			/* We got chip-mem. Better try again and again and again etc., we might get fast-mem sooner or later... */
-			/* Using gctest to check the effectiviness of doing this, does seldom give a very good result. */
+			/* Using gctest to check the effectiveness of doing this, does seldom give a very good result. */
 			/* However, real programs doesn't normally rapidly allocate and deallocate. */
 //			printf("trying to force... %d bytes... ",size);
 			if(
diff --git a/finalize.c b/finalize.c
index 1e07c47..65b694a 100644
--- a/finalize.c
+++ b/finalize.c
@@ -20,7 +20,7 @@
 
 /* Type of mark procedure used for marking from finalizable object.     */
 /* This procedure normally does not mark the object, only its           */
-/* descendents.                                                         */
+/* descendants.                                                         */
 typedef void (* finalization_mark_proc)(ptr_t /* finalizable_obj_ptr */);
 
 #define HASH3(addr,size,log_size) \
diff --git a/include/cord.h b/include/cord.h
index cb41de8..0e52f84 100644
--- a/include/cord.h
+++ b/include/cord.h
@@ -326,7 +326,7 @@ CORD_API size_t CORD_rchr(CORD x, size_t i, int c);
 /*    the correct buffer size.                                          */
 /* 4. Most of the conversions are implement through the native          */
 /*    vsprintf.  Hence they are usually no faster, and                  */
-/*    idiosyncracies of the native printf are preserved.  However,      */
+/*    idiosyncrasies of the native printf are preserved.  However,      */
 /*    CORD arguments to CORD_sprintf and CORD_vsprintf are NOT copied;  */
 /*    the result shares the original structure.  This may make them     */
 /*    very efficient in some unusual applications.                      */
diff --git a/include/gc.h b/include/gc.h
index 8b69b94..8c367df 100644
--- a/include/gc.h
+++ b/include/gc.h
@@ -401,8 +401,8 @@ GC_API void GC_CALL GC_init(void);
 /* new object is cleared.  GC_malloc_stubborn promises that no changes  */
 /* to the object will occur after GC_end_stubborn_change has been       */
 /* called on the result of GC_malloc_stubborn.  GC_malloc_uncollectable */
-/* allocates an object that is scanned for pointers to collectable      */
-/* objects, but is not itself collectable.  The object is scanned even  */
+/* allocates an object that is scanned for pointers to collectible      */
+/* objects, but is not itself collectible.  The object is scanned even  */
 /* if it does not appear to be reachable.  GC_malloc_uncollectable and  */
 /* GC_free called on the resulting object implicitly update             */
 /* GC_non_gc_bytes appropriately.                                       */
@@ -954,7 +954,7 @@ GC_API void GC_CALL GC_debug_register_finalizer(void * /* obj */,
 	/* allocated by GC_malloc or friends. Obj may also be   */
 	/* NULL or point to something outside GC heap (in this  */
 	/* case, fn is ignored, *ofn and *ocd are set to NULL). */
-	/* Note that any garbage collectable object referenced  */
+	/* Note that any garbage collectible object referenced  */
 	/* by cd will be considered accessible until the        */
 	/* finalizer is invoked.                                */
 
diff --git a/include/gc_allocator.h b/include/gc_allocator.h
index 9f38180..9152a5d 100644
--- a/include/gc_allocator.h
+++ b/include/gc_allocator.h
@@ -27,9 +27,9 @@
  * the garbage collector.  Gc_alloctor<T> allocates garbage-collectable
  * objects of type T.  Traceable_allocator<T> allocates objects that
  * are not themselves garbage collected, but are scanned by the
- * collector for pointers to collectable objects.  Traceable_alloc
+ * collector for pointers to collectible objects.  Traceable_alloc
  * should be used for explicitly managed STL containers that may
- * point to collectable objects.
+ * point to collectible objects.
  *
  * This code was derived from an earlier version of the GNU C++ standard
  * library, which itself was derived from the SGI STL implementation.
diff --git a/include/gc_cpp.h b/include/gc_cpp.h
index 40fe729..d815e77 100644
--- a/include/gc_cpp.h
+++ b/include/gc_cpp.h
@@ -27,16 +27,16 @@ Garbage Collection for C++", by John R. Elis and David L. Detlefs
 All heap-allocated objects are either "collectable" or
 "uncollectable".  Programs must explicitly delete uncollectable
 objects, whereas the garbage collector will automatically delete
-collectable objects when it discovers them to be inaccessible.
+collectible objects when it discovers them to be inaccessible.
 Collectable objects may freely point at uncollectable objects and vice
 versa.
 
 Objects allocated with the built-in "::operator new" are uncollectable.
 
-Objects derived from class "gc" are collectable.  For example:
+Objects derived from class "gc" are collectible.  For example:
 
     class A: public gc {...};
-    A* a = new A;       // a is collectable.
+    A* a = new A;       // a is collectible.
 
 Collectable instances of non-class types can be allocated using the GC
 (or UseGC) placement:
@@ -50,17 +50,17 @@ using the NoGC placement:
     class A: public gc {...};
     A* a = new (NoGC) A;   // a is uncollectable.
 
-The new(PointerFreeGC) syntax allows the allocation of collectable
+The new(PointerFreeGC) syntax allows the allocation of collectible
 objects that are not scanned by the collector.  This useful if you
 are allocating compressed data, bitmaps, or network packets.  (In
 the latter case, it may remove danger of unfriendly network packets
 intentionally containing values that cause spurious memory retention.)
 
-Both uncollectable and collectable objects can be explicitly deleted
+Both uncollectable and collectible objects can be explicitly deleted
 with "delete", which invokes an object's destructors and frees its
 storage immediately.
 
-A collectable object may have a clean-up function, which will be
+A collectible object may have a clean-up function, which will be
 invoked when the collector discovers the object to be inaccessible.
 An object derived from "gc_cleanup" or containing a member derived
 from "gc_cleanup" has a default clean-up function that invokes the
@@ -79,7 +79,7 @@ B, B is considered accessible.  After A's clean-up is invoked and its
 storage released, B will then become inaccessible and will have its
 clean-up invoked.  If A points at B and B points to A, forming a
 cycle, then that's considered a storage leak, and neither will be
-collectable.  See the interface gc.h for low-level facilities for
+collectible.  See the interface gc.h for low-level facilities for
 handling such cycles of objects with clean-up.
 
 The collector cannot guarantee that it will find all inaccessible
@@ -96,14 +96,14 @@ add -DGC_OPERATOR_NEW_ARRAY to the Makefile.
 
 If your compiler doesn't support "operator new[]", beware that an
 array of type T, where T is derived from "gc", may or may not be
-allocated as a collectable object (it depends on the compiler).  Use
-the explicit GC placement to make the array collectable.  For example:
+allocated as a collectible object (it depends on the compiler).  Use
+the explicit GC placement to make the array collectible.  For example:
 
     class A: public gc {...};
-    A* a1 = new A[ 10 ];        // collectable or uncollectable?
-    A* a2 = new (GC) A[ 10 ];   // collectable
+    A* a1 = new A[ 10 ];        // collectible or uncollectable?
+    A* a2 = new (GC) A[ 10 ];   // collectible
 
-3. The destructors of collectable arrays of objects derived from
+3. The destructors of collectible arrays of objects derived from
 "gc_cleanup" will not be invoked properly.  For example:
 
     class A: public gc_cleanup {...};
@@ -250,10 +250,10 @@ inline void* operator new( size_t size, GC_NS_QUALIFY(GCPlacement) gcp,
 			  GC_NS_QUALIFY(GCCleanUpFunc) cleanup = 0,
 			  void* clientData = 0 );
     /*
-    Allocates a collectable or uncollected object, according to the
+    Allocates a collectible or uncollected object, according to the
     value of "gcp".
 
-    For collectable objects, if "cleanup" is non-null, then when the
+    For collectible objects, if "cleanup" is non-null, then when the
     allocated object "obj" becomes inaccessible, the collector will
     invoke the function "cleanup( obj, clientData )" but will not
     invoke the object's destructors.  It is an error to explicitly
diff --git a/mach_dep.c b/mach_dep.c
index d910939..c1c3162 100644
--- a/mach_dep.c
+++ b/mach_dep.c
@@ -105,7 +105,7 @@
 
 # if defined(M68K) && defined(AMIGA)
     /* This function is not static because it could also be             */
-    /* errorneously defined in .S file, so this error would be caught   */
+    /* erroneously defined in .S file, so this error would be caught   */
     /* by the linker.                                                   */
     void GC_push_regs(void)
     {
diff --git a/malloc.c b/malloc.c
index 6e18da2..0eba74c 100644
--- a/malloc.c
+++ b/malloc.c
@@ -288,7 +288,7 @@ GC_API void * GC_CALL GC_generic_malloc(size_t lb, int k)
    }
 }
 
-/* Allocate lb bytes of pointerful, traced, but not collectable data */
+/* Allocate lb bytes of pointerful, traced, but not collectible data */
 GC_API void * GC_CALL GC_malloc_uncollectable(size_t lb)
 {
     void *op;
@@ -366,7 +366,7 @@ void * malloc(size_t lb)
     /* to at most a jump instruction in this case.                      */
 #   if defined(I386) && defined(GC_SOLARIS_THREADS)
       /*
-       * Thread initialisation can call malloc before
+       * Thread initialization can call malloc before
        * we're ready for it.
        * It's not clear that this is enough to help matters.
        * The thread implementation may well call malloc at other
@@ -417,7 +417,7 @@ void * calloc(size_t n, size_t lb)
       return NULL;
 #   if defined(GC_LINUX_THREADS) /* && !defined(USE_PROC_FOR_LIBRARIES) */
 	/* libpthread allocated some memory that is only pointed to by  */
-	/* mmapped thread stacks.  Make sure it's not collectable.      */
+	/* mmapped thread stacks.  Make sure it's not collectible.      */
 	{
 	  static GC_bool lib_bounds_set = FALSE;
 	  ptr_t caller = (ptr_t)__builtin_return_address(0);
diff --git a/mallocx.c b/mallocx.c
index c768222..70f16bd 100644
--- a/mallocx.c
+++ b/mallocx.c
@@ -480,7 +480,7 @@ GC_API void * GC_CALL GC_memalign(size_t align, size_t lb)
     return result;
 }
 
-/* This one exists largerly to redirect posix_memalign for leaks finding. */
+/* This one exists largely to redirect posix_memalign for leaks finding. */
 GC_API int GC_CALL GC_posix_memalign(void **memptr, size_t align, size_t lb)
 {
   /* Check alignment properly.  */
diff --git a/mark.c b/mark.c
index 67d4fb6..e6d18ef 100644
--- a/mark.c
+++ b/mark.c
@@ -130,7 +130,7 @@ GC_INNER GC_bool GC_mark_stack_too_small = FALSE;
 static struct hblk * scan_ptr;
 
 STATIC GC_bool GC_objects_are_marked = FALSE;
-		/* Are there collectable marked objects in the heap?    */
+		/* Are there collectible marked objects in the heap?    */
 
 /* Is a collection in progress?  Note that this can return true in the  */
 /* nonincremental case, if a collection has been abandoned and the      */
@@ -991,7 +991,7 @@ STATIC void GC_do_local_mark(mse *local_mark_stack, mse *local_top)
 	    /* Try to share the load, since the main stack is empty,    */
 	    /* and helper threads are waiting for a refill.             */
 	    /* The entries near the bottom of the stack are likely      */
-	    /* to require more work.  Thus we return those, eventhough  */
+	    /* to require more work.  Thus we return those, even though  */
 	    /* it's harder.                                             */
 	    mse * new_bottom = local_mark_stack
 				+ (local_top - local_mark_stack)/2;
diff --git a/misc.c b/misc.c
index bff27a9..856b74f 100644
--- a/misc.c
+++ b/misc.c
@@ -323,7 +323,7 @@ GC_INNER void GC_extend_size_map(size_t i)
   void *GC_clear_stack_inner(void *, ptr_t);
 #else
   /* Clear the stack up to about limit.  Return arg.  This function is  */
-  /* not static because it could also be errorneously defined in .S     */
+  /* not static because it could also be erroneously defined in .S     */
   /* file, so this error would be caught by the linker.                 */
   void * GC_clear_stack_inner(void *arg, ptr_t limit)
   {
diff --git a/os_dep.c b/os_dep.c
index 4d56ab7..df65ad4 100644
--- a/os_dep.c
+++ b/os_dep.c
@@ -3431,7 +3431,7 @@ STATIC void GC_protect_heap(void)
 }
 
 /* We assume that either the world is stopped or its OK to lose dirty   */
-/* bits while this is happenning (as in GC_enable_incremental).         */
+/* bits while this is happening (as in GC_enable_incremental).         */
 GC_INNER void GC_read_dirty(void)
 {
 #   if defined(GWW_VDB)
diff --git a/tests/test.c b/tests/test.c
index af72e70..b840849 100644
--- a/tests/test.c
+++ b/tests/test.c
@@ -623,7 +623,7 @@ void *GC_CALLBACK reverse_test_inner(void *data)
       h[1999] = gcj_ints(1,200);
       for (i = 0; i < 51; ++i)
 	h[1999] = gcj_reverse(h[1999]);
-      /* Leave it as the reveresed list for now. */
+      /* Leave it as the reversed list for now. */
 #   else
       h[1999] = ints(1,200);
 #   endif
diff --git a/tests/test_cpp.cc b/tests/test_cpp.cc
index 76eb47b..bca9fe3 100644
--- a/tests/test_cpp.cc
+++ b/tests/test_cpp.cc
@@ -86,7 +86,7 @@ class A {public:
 
 
 class B: public GC_NS_QUALIFY(gc), public A { public:
-    /* A collectable class. */
+    /* A collectible class. */
 
     B( int j ): A( j ) {}
     ~B() {
@@ -99,7 +99,7 @@ int B::deleting = 0;
 
 
 class C: public GC_NS_QUALIFY(gc_cleanup), public A { public:
-    /* A collectable class with cleanup and virtual multiple inheritance. */
+    /* A collectible class with cleanup and virtual multiple inheritance. */
 
     C( int levelArg ): A( levelArg ), level( levelArg ) {
 	nAllocated++;
@@ -130,7 +130,7 @@ int C::nAllocated = 0;
 
 
 class D: public GC_NS_QUALIFY(gc) { public:
-    /* A collectable class with a static member function to be used as
+    /* A collectible class with a static member function to be used as
     an explicit clean-up function supplied to ::new. */
 
     D( int iArg ): i( iArg ) {
@@ -151,7 +151,7 @@ int D::nAllocated = 0;
 
 
 class E: public GC_NS_QUALIFY(gc_cleanup) { public:
-    /* A collectable class with clean-up for use by F. */
+    /* A collectible class with clean-up for use by F. */
 
     E() {
 	nAllocated++;}
@@ -166,7 +166,7 @@ int E::nAllocated = 0;
 
 
 class F: public E {public:
-    /* A collectable class with clean-up, a base with clean-up, and a
+    /* A collectible class with clean-up, a base with clean-up, and a
     member with clean-up. */
 
     F() {
@@ -264,7 +264,7 @@ int APIENTRY WinMain( HINSTANCE instance ATTR_UNUSED,
 	    (void)f;
 	    if (0 == i % 10) delete c;}
 
-	    /* Allocate a very large number of collectable As and Bs and
+	    /* Allocate a very large number of collectible As and Bs and
 	    drop the references to them immediately, forcing many
 	    collections. */
 	for (i = 0; i < 1000000; i++) {
diff --git a/typd_mlc.c b/typd_mlc.c
index 03253b9..e7e57cb 100644
--- a/typd_mlc.c
+++ b/typd_mlc.c
@@ -1,6 +1,6 @@
 /*
  * Copyright (c) 1991-1994 by Xerox Corporation.  All rights reserved.
- * opyright (c) 1999-2000 by Hewlett-Packard Company.  All rights reserved.
+ * copyright (c) 1999-2000 by Hewlett-Packard Company.  All rights reserved.
  *
  * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
  * OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
diff --git a/win32_threads.c b/win32_threads.c
index 96d14e8..1f21ca6 100644
--- a/win32_threads.c
+++ b/win32_threads.c
@@ -1367,7 +1367,7 @@ STATIC word GC_push_stack_for(GC_thread thread, DWORD me)
 
     /* Push all registers that might point into the heap.  Frame        */
     /* pointer registers are included in case client code was           */
-    /* compiled with the 'omit frame pointer' optimisation.             */
+    /* compiled with the 'omit frame pointer' optimization.             */
 #   define PUSH1(reg) GC_push_one((word)context.reg)
 #   define PUSH2(r1,r2) (PUSH1(r1), PUSH1(r2))
 #   define PUSH4(r1,r2,r3,r4) (PUSH2(r1,r2), PUSH2(r3,r4))

