Abstract Algebra Notes

“(…) 1000 years ago negative numbers were considered to be an outrageous idea. After all, it was said, numbers are for counting: we may have one orange, or two oranges, or no oranges at all; but how can we have minus an orange? The logisticians, or professional calculators, of those days used negative numbers as an aid in their computations; they considered these numbers to be a useful fiction, for if you believe in them then every linear equation ax + b = 0 has a solution (namely x = −b/a, provided a ≠ 0). Even the great Diophantus once described the solution of 4x + 6 = 2 as an absurd number. The idea of a numeration which included negative numbers was far too abstract for many of the learned heads of the tenth century.” (Pinter, Charles C.. A Book of Abstract Algebra: Second Edition. Dover Reprints. Kindle Edition, pg. 12)

Today negative numbers are handled by 10-year-old children and have proved to be a very useful idea, yet their nature is just as abstract as it was 1000 years ago.

These are notes from my own readings on algebra, based on the sources listed in the references.

The well-known sets such as the reals, integers, rationals and complex numbers will be denoted by bold uppercase letters (R, Z, Q, C, …), and other sets are denoted by italic or plain uppercase letters (G, R, S, T, …).

About Abstract Algebra

Abstract Algebra is a discipline that describes mathematical systems and their properties through a hierarchical and generalized (abstract) approach.

1. Some preliminaries

Below are some prerequisite concepts from set theory, functions and number theory.

1.1. Relations and equivalence classes

“Mathematicians do not study objects, but the relations between objects; to them it is a matter of indifference if these objects are replaced by others, provided that the relations do not change. Matter does not engage their attention; they are interested in form alone.” (Poincaré)

Definition 1.1. Let S and T be sets. Then a relation from S to T is a subset R of S × T. If s ∈ S and t ∈ T, then we write sRt if (s, t) ∈ R. In particular, a relation on S is a relation from S to S.

Example. Let S = {1, 2, 3} and T = {1, 2, 3, 4}. Define a relation ρ from S to T via sρt if and only if st² ≤ 4. Then ρ = {(1, 1), (1, 2), (2, 1), (3, 1)}.

Definition 1.2. Let R be a relation on S. We say that R is reflexive if (a, a) ∈ R for all a ∈ S.

Definition 1.3. A relation R on a set S is symmetric if (a, b) ∈ R implies (b, a) ∈ R for all a, b ∈ S.

Definition 1.4. Let R be a relation on a set S. We say that R is transitive if, whenever (a, b) ∈ R, and (b, c) ∈ R we also have (a, c) ∈ R, for all a, b, c ∈ S.

Equivalence relations are of particular interest:

Many times in mathematics we run into the following kind of situation. We have a set S and we wish to identify certain elements of S with each other, i.e., to regard certain elements as being “essentially the same” even though they are different elements. This comes about when we are considering some relationship that may or may not hold between two elements of S, and we wish to “lump together” any two elements between which the relationship holds. For example, if we were considering the set of all triangles in the plane, we might want to regard as “the same” any two triangles that were congruent to each other. Not every relation R on S is suitable for use in performing identifications. For example, we certainly want to identify any element s with itself, so if we aim to identify s1 with s2 if and only if s1Rs2, then we want R to have the property that sRs, for every s. Similarly, if we are going to identify s1 with s2 then we want to identify s2 with s1. So we want R to have the property that s1Rs2 implies s2Rs1. Finally, if we identify s1 and s2 and also s2 and s3 then we want to identify s1 and s3. (Saracino, Dan. Abstract Algebra: A First Course (p. 80). Waveland Pr Inc. Kindle Edition.)

Definition 1.5. An equivalence relation on a set S is a relation that is reflexive, symmetric and transitive. The symbol ∼ denotes an equivalence relation.

Example. Let S = Z × (Z\{0}). Define ∼ on S via (a, b) ∼ (c, d) if and only  if ad = bc. We must verify that ∼ is an equivalence relation. 

Reflexivity: As ab =  ba, we have (a, b) ∼ (a, b) for all integers a and nonzero integers b. 

Symmetry: Suppose that (a, b) ∼ (c, d). Then ad = bc, and hence cb = da, so (c, d) ∼ (a, b).

Transitivity: Let (a, b) ∼ (c, d) and (c, d) ∼ (e, f). Then ad = bc and cf = de. Thus, adf = bcf = bde. Since d ≠ 0, this means that af = be. Therefore, (a, b) ∼ (e, f).

Definition 1.6. Let ∼ be an equivalence relation on a set S. If a ∈ S, then the equivalence class of a, denoted [a] is the set {b ∈ S: a ∼ b}.

Definition 1.7. Let S be a set, and let T be a set of nonempty subsets of S. We say that T is a partition of S if every a ∈ S lies in exactly one set in T.

Example of a partition. Let S = {1, 2, 3, 4, 5, 6, 7} and T = {{1, 3, 4, 6},{2, 7},{5}}. Then T is a partition of S.

Theorem 1.1. Let S be a set, and ∼ an equivalence relation on S. Then the equivalence classes with respect to ∼ form a partition of S. In particular, if a ∈ S, then  a ∈ [a] and, furthermore, a ∈ [b] if and only if [a]=[b].  
Proof. As ∼ is reflexive, a ∼ a, and hence a ∈ [a] for every a ∈ S. In particular,  the equivalence classes are not empty, and every element of S is in at least one  of them. Suppose that d ∈ [a]∩[c]. We must show that [a]=[c]. If e ∈ [a], then  a ∼ e. Also, d ∈ [a] means that a ∼ d, and hence d ∼ a by symmetry. Also, c ∼ d.  By transitivity, c ∼ a, and then c ∼ e. Thus, e ∈ [c], and therefore [a]⊆[c]. By  the same argument, [c]⊆[a], and hence [a]=[c]. Thus, the equivalence classes do indeed form a partition. To prove the final statement of the theorem, note that if  a ∈ [a]∩[b], then [a]=[b] and, conversely, if [a]=[b], then a ∈ [a]=[b].(q.e.d) 
(Lee, Gregory T.. Abstract Algebra (Springer Undergraduate Mathematics Series) (p. 8). Springer International Publishing. Kindle Edition.)

Corollary. Let R be an equivalence relation on S. Then for every s1, s2 ∈ S, (s1, s2) ∈ R (or s1Rs2) iff [s1] = [s2].

Example. On Z, let us say that a ∼ b if and only if a + b is even. We claim that  ∼ is an equivalence relation. If a ∈ Z, then a + a is certainly even, so a ∼ a, and ∼  is reflexive. If a ∼ b, then a + b is even. But this also means that b + a is even, and  hence b ∼ a. Thus, ∼ is symmetric. Finally, suppose that a ∼ b and b ∼ c. Then  a + b and b + c are both even. This means that their sum, a + 2b + c, is even. As  2b is even, we see that a + c is even, and hence a ∼ c. That is, ∼ is transitive. 

Given the relation above, let's look at its equivalence classes.
We know a ∼ 0 iff a + 0 is even, i.e., iff a is even. Thus:
[0] = {..., -6, -4, -2, 0, 2, 4, 6, ...}
a ∼ 1 iff a + 1 is even, which means a is odd:
[1] = {..., -5, -3, -1, 1, 3, 5, 7, ...}
a ∼ 2 iff a + 2 is even, thus
[2] = {..., -6, -4, -2, 0, 2, 4, 6, ...}, thus [0] = [2]
a ∼ 3 iff a + 3 is even, thus
[3] = {..., -5, -3, -1, 1, 3, 5, 7, ...}, thus [1] = [3]
... and we could go on, and on...

From the example above we note that equivalence classes break the set down into subsets having no elements in common (e.g., [0] has no element in common with [1]). But unless there is only one element in an equivalence class, the representative chosen for that class is not unique. That is, if b ∈ [a], by Definition 1.6 we may write [b] instead of [a]; in the example above, [6] = [-2], [3] = [7], and so forth.
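
To make the idea concrete, here is a small computational sketch in Python (the code and function names are my own illustration, not from the referenced texts) that groups a range of integers into the classes of the relation a ~ b iff a + b is even.

# Illustrative sketch: partition the integers -7..7 into the equivalence
# classes of the relation a ~ b iff a + b is even.

def related(a, b):
    # a ~ b if and only if a + b is even
    return (a + b) % 2 == 0

def equivalence_classes(elements, rel):
    classes = []
    for x in elements:
        for cls in classes:
            if rel(x, cls[0]):          # compare x with a class representative
                cls.append(x)
                break
        else:
            classes.append([x])         # x starts a new class
    return classes

for cls in equivalence_classes(range(-7, 8), related):
    print(cls)
# Two classes are printed: the odd integers and the even integers.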

1.2. Functions

Formally, if S and T are sets, then a function from S to T is a relation ρ from S to T such that, for each s ∈ S, there is exactly one t ∈ T such that sρt.

Example. Let S be the set of all English words and T the set of letters in the  alphabet. We can define α: S → T by letting α(w) be the first letter of the word w,  for every w ∈ S. 

Definition 1.8. A function α: S → T is one-to-one (or injective) if α(s1) = α(s2) implies s1 = s2, for all s1,s2 ∈ S.

A function α: S → T is onto (or surjective) if for every t ∈ T there is some s ∈ S with α(s) = t.

Definition 1.9. A function α: S → T is bijective if it is one-to-one and onto.

Definition 1.10. Let R, S and T be sets, and let α: R → S and β: S → T be functions. Then the composition, β ◦ α, or simply βα, is the function from R to T given by (βα)(r) = β(α(r)) for all r ∈ R. I like to think of a composition as chaining two functions: the image of the first function feeds the domain of the second, and we can keep chaining further maps in the same way.

Theorem 1.2. Let α: R → S, β: S → T and γ: T → U be functions. Then,
1. (γβ)α = γ (βα);  
2. if α and β are one-to-one, then so is βα;
3. if α and β are onto, then so is βα; and  
4. if α and β are bijective, then so is βα. 


Proof. 
(1) Take any r ∈ R. Then ((γβ)α)(r) = (γβ)(α(r)) = γ (β(α(r))). Similarly, (γ (βα))(r) = γ ((βα)(r)) = γ (β(α(r))).  
(2) Suppose that (βα)(r1) = (βα)(r2) for some r1,r2 ∈ R. Then β(α(r1)) =  β(α(r2)). Since β is one-to-one, α(r1) = α(r2). Since α is one-to-one, r1 = r2.  
(3) Take any t ∈ T. Since β is onto, there exists an s ∈ S such that β(s) = t. But α  is also onto, so there exists an r ∈ R such that α(r) = s. Thus, (βα)(r) = β(α(r)) =  β(s) = t.  
(4) Combine (2) and (3).
(q.e.d.)


Theorem 1.3. Let α: S → T be a bijective function. Then there exists a bijective  function β: T → S such that (βα)(s) = s for all s ∈ S and (αβ)(t) = t for all  t ∈ T.  

Proof. Since α is bijective, for any t ∈ T, there is a unique s ∈ S such that α(s) = t.  
Define β: T → S via β(t) = s. By definition, we have (βα)(s) = β(α(s)) = s, for  all s ∈ S. 
Also, if t ∈ T, then choosing s such that α(s) = t, we have β(t) = s, and  therefore (αβ)(t) = α(β(t)) = α(s) = t, as required. 
It remains to show that β is  bijective. But if β(t1) = β(t2), then  t1 = (αβ)(t1) = α(β(t1)) = α(β(t2)) = (αβ)(t2) = t2, so β is one-to-one. Furthermore, if s ∈ S, then β(α(s)) = (βα)(s) = s, and hence  β is onto.
(q.e.d.)

Definition 1.11. A permutation of a set S is a bijective function from S to S.

Definition 1.12. Let S be a set. Then a binary operation on S is a function from S × S to S.

1.3. Prime factorization

Definition 1.13. A natural number p > 1 is said to be prime if its only positive divisors are 1 and p. Otherwise, it is composite. Note that 1 is neither prime nor composite.

Theorem 1.4. (Euclid’s Lemma). Let p > 1 be a positive integer. Then the following are equivalent: 1. p is prime; and 2. if a and b are integers such that p|ab, then p|a or p|b. (p|ab is read as p divides ab, or ab is a multiple of p)

Theorem 1.5. (Fundamental Theorem of Arithmetic). If a ∈ N and a > 1, then there exist primes p1,…, pn (not necessarily distinct) such that a = p1 p2 ··· pn. Furthermore, this product is unique up to order. That is, if a = q1q2 ··· qm, for some primes qi, then m = n and, after rearranging the primes, pi = qi for all i.

Lemma. (Division Algorithm) If a and n are integers and n is positive, then there exist unique integers q and r such that a = qn + r and 0 ≤ r < n (q stands for quotient and r stands for remainder).

Proposition: for a fixed positive integer m, a ~ b <-> a and b have the same remainder upon division by m <-> m | (a-b)

Bézout’s Identity: Let a and b be integers or polynomials such that gcd(a,b)=d. Then there exist integers or polynomials x and y such that ax + by = d.
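
Here is a short computational sketch of Bézout's identity for integers (my own illustration, using the standard extended Euclidean algorithm; the function name is hypothetical):

# Illustrative sketch: extended Euclidean algorithm, returning (d, x, y)
# with d = gcd(a, b) and a*x + b*y = d, for nonnegative integers a, b.

def extended_gcd(a, b):
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

d, x, y = extended_gcd(240, 46)
print(d, x, y)          # 2 -9 47, and indeed 240*(-9) + 46*47 = 2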

1.4 Induction

The Well Ordering Principle says that every nonempty subset of natural numbers has a least element. For example, the least element of N itself is 0.

Theorem.1.6 (The Principle of Mathematical Induction): Let 𝑆 be a set of natural numbers such that (i) 0 ∈ 𝑆 and (ii) for all 𝑘 ∈ N, 𝑘 ∈ 𝑆 → 𝑘 + 1 ∈ 𝑆. Then 𝑆 = N.

Proof*.
Let 𝑆 be a set of natural numbers such that 0 ∈ 𝑆 (condition (i)), and such that whenever 𝑘 ∈ 𝑆, 𝑘 + 1 ∈ 𝑆 (condition (ii)). Assume toward contradiction that 𝑆 ≠ N. Let 𝐴 = {𝑘 ∈ N | 𝑘 ∉ 𝑆} (so 𝐴 is the set of natural numbers not in 𝑆). Since 𝑆 ≠ N, 𝐴 is nonempty. So, by the Well Ordering Principle, 𝐴 has a least element, call it 𝑎. 𝑎 ≠ 0 because 0 ∈ 𝑆 and 𝑎 ∉ 𝑆. So 𝑎 − 1 ∈ N, and since 𝑎 − 1 < 𝑎 and 𝑎 is the least element of 𝐴, we have 𝑎 − 1 ∉ 𝐴, that is, 𝑎 − 1 ∈ 𝑆. Letting 𝑘 = 𝑎 − 1, condition (ii) gives 𝑘 + 1 = (𝑎 − 1) + 1 = 𝑎 ∈ 𝑆. But 𝑎 ∈ 𝐴, which means that 𝑎 ∉ 𝑆. This is a contradiction, and so 𝑆 = N.

*Warner, Steve. Pure Mathematics for Beginners: A Rigorous Introduction to Logic, Set Theory, Abstract Algebra, Number Theory, Real Analysis, Topology, Complex Analysis, and Linear Algebra (p. 43). Get 800. Kindle Edition. 

Restating: Let P(n) be a statement and suppose that (i) P(0) is true and (ii) for all 𝑘 ∈ N, P(k) → P(k + 1). Then P(n) is true for all n ∈ N.

2. Algebraic structures: Semigroups, Monoids, Groups, Rings and Fields

A mathematical set is simply a collection of objects. These objects might have something in common, but that is not necessary: a set need not have special properties. Algebraic structures, on the other hand, do have special properties.

Definition. A mathematical system S is a set S = {E, O, A} where E is a nonempty set of elements, O is a set of relations and operations on E, and A is a set of axioms concerning the elements of E and O. The elements of E are called the elements of the system. [Deskins, W. E.. Abstract Algebra (Dover Books on Mathematics) (p. 24). Dover Publications. Kindle Edition.]

Groups

Historically, of course, the examples preceded the abstract definition; the abstract concept of “group” came into being only when people realized that many objects they were working with had various structural characteristics in common, and got the idea that some economy of effort could be achieved by studying these common features abstractly rather than over and over again in each separate case. In discussing the development of the subject, E. T. Bell has remarked that “wherever groups disclosed themselves, or could be introduced, simplicity crystallized out of comparative chaos.” (Saracino, Dan. Abstract Algebra: A First Course (p. 17). Waveland Pr Inc. Kindle Edition.)

Before groups, there are two other, simpler algebraic structures called semigroups and monoids.

A semigroup is a pair (S, *) where S is a set and * is an associative binary operation on S.

A monoid is a pair (S, *) which is a semigroup with an identity element e, i.e., for all a ∈ S we have a*e = e*a = a.

Comments and Examples

  1. Loosely, a binary operation is a function f: S × S → S. Normally we would write f(s1, s2) ∈ S. Abstract algebra texts use what is called infix notation: s1 * s2 ∈ S, in which * is a binary operation.
  2. The set of even integers under multiplication, (2Z, ·), is a semigroup: the product of two even integers is an even integer, and multiplication is associative. The same closure and associativity hold for addition. But (2Z, ·) is not a monoid, since it lacks the identity element 1.
  3. The positive integers under addition form a semigroup but not a monoid, because the identity 0 is missing. (With the convention used in these notes that 0 ∈ N, (N, +) itself is a monoid.)
  4. For any n ∈ Z, (nZ, +) is a monoid with identity 0.
  5. The set 2Z+1 of odd integers is a semigroup under multiplication. It is also a monoid, because the identity element 1 ∈ {…, -3, -1, 1, 3, 5, 7, …}. It is not a semigroup under addition, because adding two odd integers returns an even integer (the set is not closed under addition); it also has no additive identity 0.
  6. The pair (2Z, ·) is not a monoid since it lacks the identity 1.
  7. The semigroup (Z, *) where * is defined as a*b = min{a, b} is not a monoid. To see this suppose there is an identity e. Then e*(e+1)=e+1. But by definition of *, e*(e+1)=e. Therefore, by contradiction, we showed (Z, *) has no identity.
  8. Let S = {u, v, w}. Now define the following operation table:
* | u v w
=========
u | v w w
v | w u u
w | u v v

Notice that (u * v) * w = w * w = v and u * (v * w) = u * u = v, so these two agree. However, (u * w) * v = w * v = v, while u * (w * v) = u * v = w, so the operation is not associative; therefore (S, *) is not a semigroup.
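
Associativity can also be checked by brute force. Below is a small Python sketch (my own illustration) that encodes the table above and finds a failing triple:

# Illustrative sketch: brute-force associativity check for the table above.
op = {
    ('u', 'u'): 'v', ('u', 'v'): 'w', ('u', 'w'): 'w',
    ('v', 'u'): 'w', ('v', 'v'): 'u', ('v', 'w'): 'u',
    ('w', 'u'): 'u', ('w', 'v'): 'v', ('w', 'w'): 'v',
}
S = ['u', 'v', 'w']

def is_associative(elements, table):
    for a in elements:
        for b in elements:
            for c in elements:
                if table[(table[(a, b)], c)] != table[(a, table[(b, c)])]:
                    print(f"({a}*{b})*{c} != {a}*({b}*{c})")
                    return False
    return True

print(is_associative(S, op))    # False: the operation is not associative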

Now, a group is a monoid in which every element is invertible:

Definition 2.1: A group is a set G with a binary operation ∗ such that:

i) (Closure) ∀ g1, g2 ∈ G, g1 ∗ g2 ∈ G;

ii) (Associativity) ∀ g1, g2, g3 ∈ G, (g1 ∗ g2) ∗ g3 = g1 ∗ (g2 ∗ g3);

iii) (Identity) ∃ e ∈ G such that ∀ g ∈ G, e ∗ g = g ∗ e = g;

iv) (Inverses) ∀ g ∈ G, ∃ g ′ ∈ G such that g ∗ g ′ = g ′ ∗ g = e.

(Alcock, Lara. How to Think About Abstract Algebra (p. 106). OUP Oxford. Kindle Edition.)

Comments and Examples

Note that closure has been needlessly pointed out, since a binary operation is always closed (it maps pairs of elements of a set into the same set). Also, commutativity is not an axiom; when a group is commutative, it is said to be an Abelian group.

Examples of groups: the integers under addition; the integer multiples of 3 under addition; the integers modulo 12 under addition modulo 12; continuous real functions under function addition; the permutations of the set A = {cow, dog, cat} under composition.

Structures that are not groups: the integers under division (division is not even a binary operation on Z, and it is not associative), the naturals under subtraction (same reason), the set of all 2×2 matrices under multiplication (not every matrix is invertible).

Since the elements of groups can be operations themselves, a more explicit way to define a group is:

Definition 2.2. A group G is a non-empty set with a law of composition ◦: G × G → G that satisfies axioms (i), (ii), (iii) and (iv) of Definition 2.1.

The study of groups is usually concerned with symmetries.

For instance, the symmetries of an equilateral triangle form a group:

Group of permutations on the set {1, 2, 3} represented as the group of symmetries (rotations and reflections) on an equilateral triangle [Alcock, Lara. How to Think About Abstract Algebra (p. 138). OUP Oxford. Kindle Edition.]

I think it is easier to grasp this by thinking of triples and their permutations. Permutation groups are of particular interest to mathematicians, because every group is isomorphic to a group of permutations (Cayley's Theorem). Let S3 denote the group of permutations on a set with 3 elements. Naturally it has 3! = 6 elements; note that the elements belonging to the group are the permutations themselves (operations). Garrity [4] (p. 194) explains this in detail.

A composition table can be constructed for the permutation group S3 = {e, (12), (13), (23), (123), (132)}; one is given in Garrity [4] (p. 194), and the sketch below rebuilds it computationally.

Start with a triple (a, b, c). Now apply the permutation (23): you get (a, c, b). Now apply the permutation (132): it returns (b, a, c), so (23)(132) = (12). Composition is not commutative, as you can see: (132)(23) = (13). S3 is the smallest non-commutative group [4]. Note that some compositions return e; it means, for example, that (132) is the inverse of (123).
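
The whole composition table can be generated by brute force. The sketch below is my own illustration in Python; it stores a permutation as a tuple p (position i is sent to p[i], 0-indexed) and composes as "apply g first, then f", which is one common convention and may differ from the one used in the text above.

# Illustrative sketch: build the composition table of S3.
from itertools import permutations

def compose(f, g):
    # (f after g): apply g first, then f
    return tuple(f[g[i]] for i in range(len(g)))

names = {
    (0, 1, 2): 'e',    (1, 0, 2): '(12)',  (2, 1, 0): '(13)',
    (0, 2, 1): '(23)', (1, 2, 0): '(123)', (2, 0, 1): '(132)',
}

elements = list(permutations(range(3)))
for f in elements:
    row = [names[compose(f, g)] for g in elements]
    print(f"{names[f]:>5} | " + " ".join(f"{r:>5}" for r in row))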

Now, using the axioms let’s demonstrate some theorems.

Let's denote a*b as ab for simplicity throughout these theorems.

Theorem 2.1. If G is a group and a, b, c are elements of G then:
(i) ab = ac implies b = c.
(ii) ba = ca implies b = c.
 
Proof.
(i) Let ab = ac.
Then a'(ab) = a'(ac).
By the associative law, (a'a)b = (a'a)c,
so eb = ec, and hence b = c. (q.e.d)

(ii) Let ba = ca.
Then (ba)a' = (ca)a' <-> b(aa') = c(aa') <-> be = ce <-> b = c. (q.e.d)

Note, since groups are not required to be commutative, we cannot conclude that ab = ca implies b = c. What this theorem establishes are the left and right cancellation laws for groups; it says nothing about commutativity.

Theorem 2.2. If G is a group and a, b are elements of G, then:
ab=e implies a=b'and b=a'.
(This theorem is saying that if ab=e, then a and b are inverses of each other.)

Proof.
ab = e <-> a'(ab) = a'e <-> (a'a)b = a' <-> b = a'.

And,
ab = e <-> (ab)b' = eb' <-> a(bb') = b' <-> a = b'. (q.e.d)

Theorem 2.3. If G is a group and a, b are elements of G, then:
(i) (ab)' = b'a' (the inverse of a product is the product of inverses on a reverse order)
(ii) (a')' = a 

Proof.
For (i):
(ab)(b'a') = a(bb')a' = a(e)a' = aa' = e, and likewise
(b'a')(ab) = b'(a'a)b = b'(e)b = b'b = e.
By Theorem 2.2, this means (ab)' = b'a'. (q.e.d.)

For (ii):
By the definition of inverses, a'(a')' = e = a'a.
Multiplying on the left by a: (aa')(a')' = (aa')a <-> e(a')' = ea <->
(a')' = a (q.e.d.)

Theorem 2.4. If G is a group and a ∈ G, then the inverse of a is unique.
Proof.
Let e be the identity of G. Let a, b, c ∈ G, and suppose both b and c are inverses of a. We have to show that b = c. By associativity:
c = ce = c(ab) = (ca)b = eb = b
c = b (q.e.d)

(We will denote the inverse of an element a either as a' or a⁻¹.)

Some comments on Theorem 2.4. We have shown that if ab = e (b is a right inverse of a) and ca = e (c is a left inverse of a), then b = c. This implies that:

  1. If a is an element of a group with a right inverse b and a left inverse c, then b=c.
  2. If a is an element of a group with a right inverse b, then b is the inverse of a.
  3. If a is an element of a group with a left inverse c, then c is the inverse of a.

By (1), (2) and (3), for an element x of (G, *) we can verify that y is the inverse of x by checking either x*y = e or y*x = e. There is no need for both computations.

Subgroups

A subgroup is not only a subset of a group. It is a group, since it has to satisfy all the axioms that define a group. The image below shows what a subgroup is at a glance.

S as a subgroup of G (Pinter, Charles C.. A Book of Abstract Algebra: Second Edition (p. 49). Dover Reprints. Kindle Edition.)

Let G be a group, and S a nonempty subset of G. It may happen (though it doesn't have to) that the product of every pair of elements of S is in S. If it happens, we say that S is closed with respect to multiplication. Then, it may happen that the inverse of every element of S is in S. In that case, we say that S is closed with respect to inverses. If both these things happen, we call S a subgroup of G. An important point to be noted is this: if S is a subgroup of G, the operation of S is the same as the operation of G. In other words, if a and b are elements of S, the product ab computed in S is precisely the product ab computed in G.

We can see why this is true. To begin with, the operation of G, restricted to elements of S, is certainly an operation on S. It is associative: for if a, b, and c are in S, they are in G (because S ⊆ G); but G is a group, so a(bc) = (ab)c. Next, the identity element of G is in S (and continues to be an identity element in S): for S is nonempty, so S contains an element a; but S is closed with respect to inverses, so S also contains a⁻¹; thus, S contains aa⁻¹ = e, because S is closed with respect to multiplication. Finally, every element of S has an inverse in S because S is closed with respect to inverses. Thus, S is a group. One reason why the notion of subgroup is useful is that it provides us with an easy way of showing that certain things are groups. Indeed, if G is already known to be a group, and S is a subgroup of G, we may conclude that S is a group without having to check all the items in the definition of “group.”

Note for a group G, {e} and G are always subgroups of G.

Theorem 2.5. Let G be a group and H a subset of G. Then H is a subgroup of G if and only if

 1. e ∈ H (the subset contains the identity); 
 2. ab ∈ H for all a, b ∈ H (the subset is closed); and
 3. a–1 ∈ H for all a ∈ H (the subset contains all inverses). 

Proof. Let H be a subgroup of G. Then H has an identity, f. Thus, ff = f. But also, ef = f. By cancellation, f = e, giving (1). Then, by definition of a group, (2) and (3) must hold. Conversely, suppose that (1)–(3) hold. We must check that H is a group. But by (2), H is closed. As the group operation is associative on G, it is associative on H. By (1) and (3), we have an identity and inverses as well. Therefore, H is a subgroup of G. (q.e.d)


Lee, Gregory T.. Abstract Algebra (Springer Undergraduate Mathematics Series) (p. 49). Springer International Publishing. Kindle Edition. 
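
Theorem 2.5 gives a finite checklist, so for small groups it can be verified mechanically. The following Python sketch is my own illustration (the ambient group here is Z4 under addition modulo 4, and the function name is hypothetical):

# Illustrative sketch: check the three conditions of Theorem 2.5 for a subset H
# of a finite group given by an operation table (a dict) and its identity.

def is_subgroup(H, op, identity):
    if identity not in H:                                     # 1. contains e
        return False
    if any(op[(a, b)] not in H for a in H for b in H):        # 2. closed
        return False
    for a in H:                                               # 3. inverses
        if not any(op[(a, b)] == identity for b in H):
            return False
    return True

G = {0, 1, 2, 3}
add4 = {(a, b): (a + b) % 4 for a in G for b in G}            # addition mod 4

print(is_subgroup({0, 2}, add4, 0))     # True
print(is_subgroup({0, 1}, add4, 0))     # False: not closed, 1 + 1 = 2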

Cyclic groups

Definition 2.3. If G is a group, then its order, |G|, is the number of elements in the set G. We say that G is a finite group if its order is finite; otherwise, it is an infinite group.

We also use power notation for group elements: in multiplicative notation aⁿ = aa···a (n times), and in additive notation na = a + a + … + a (n times). (The word order is also used for elements: the order of a is the smallest positive integer n such that aⁿ = e, if such an n exists.)

Definition 2.4.  Let (G, ·) be a group and a ∈ G. Then the cyclic subgroup generated by a is the set of all powers of a in G, and we write <a> = {aⁿ : n ∈ Z}.

Definition 2.5. Let (G, +) be a group and a ∈ G. Then the cyclic subgroup generated by a is the set of all multiples of a in G, and we write <a> = {na : n ∈ Z}.

Claim. A group G is cyclic if and only if there exists an a ∈ G such that G can be generated by a, that is, G = <a>.

As an example, it is interesting to note, by the definitions and claim above, that (Z, +) is a cyclic group of infinite order, generated by 1 and also by -1.

Another example, take the group Z12 = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11} under addition modulo 12. (Z12,+12) is a finite cyclic group that can be generated by 1, 5, 7, 11, all relatively prime to 12.

The other subgroups of Z12 are generated by the divisors of 12: <2> = {0, 2, 4, 6, 8, 10}, <3> = {0, 3, 6, 9}, <4> = {0, 4, 8}, <6> = {0, 6}, and the trivial subgroup <0> = {0}. (A tree diagram of this subgroup lattice appears in Alcock [3], p. 115.)

Now take (Z7,+7). Which elements generate the whole group? Every nonzero element generates the whole group: since 7 is a prime number, every nonzero element is relatively prime to 7. That said, the only subgroups of Z7 are Z7 itself and ({0},+7), the trivial subgroup. It has no nontrivial proper subgroups.
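
These observations about Z12 and Z7 are easy to reproduce by brute force. The Python sketch below (my own illustration; the function names are hypothetical) lists the cyclic subgroup generated by each element of Zn and the generators of the whole group:

# Illustrative sketch: cyclic subgroups and generators of Zn under +n.

def cyclic_subgroup(a, n):
    seen, x = [], 0
    while True:
        seen.append(x)
        x = (x + a) % n
        if x == 0:
            return sorted(seen)

def generators(n):
    return [a for a in range(n) if len(cyclic_subgroup(a, n)) == n]

print(generators(12))    # [1, 5, 7, 11] -- the elements relatively prime to 12
print(generators(7))     # [1, 2, 3, 4, 5, 6] -- every nonzero element
for a in range(12):
    print(a, cyclic_subgroup(a, 12))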

Whether it generates a proper subgroup or the whole group, every element of a group generates a cyclic subgroup:

Theorem 2.6. If G is a group and a ∈ G, then <a> is a subgroup of G.  

Proof. Certainly e = a⁰ ∈ <a>. Take any aᵐ, aⁿ ∈ <a>. Then aᵐaⁿ = aᵐ⁺ⁿ ∈ <a>. Finally, if aᵐ ∈ <a>, then (aᵐ)⁻¹ = a⁻ᵐ ∈ <a>. Now apply Theorem 2.5. (q.e.d)

A group table for (G,*) with G = {e, a, b}, in which e is the identity:
* | e a b
=========
e | e a b
a | a b e
b | b e a

This is a pattern for any cyclic group of order 3. They are all the same up to an isomorphism.
Note:
- In a group table an element never appears twice in the same row or column.
- When the table is symmetric about the main diagonal (entries on opposite sides of the diagonal are the same), the group is commutative, i.e., a*b = b*a. Remember groups are not required to be commutative.

Suppose G = {0, 1, 2} with addition modulo 3, (G, +3):
+ | 0 1 2
=========
0 | 0 1 2
1 | 1 2 0
2 | 2 0 1
An important example: The Klein 4-Group
Named after the German mathematician Felix Klein, the Klein four-group is defined as follows:
V={e,a,b,c}, in which e is the identity element, and a binary operation with the following table:
* | e a b c
===========
e | e a b c
a | a e c b 
b | b c e a
c | c b a e

Note that:
i) it is abelian 
ii) a² = b² = c² = e

Now let's look at its subgroups:
<e> = {e}
<a> = {e, a} because a⁰ = e, a¹ = a, a² = a*a = e; a³ = a²*a = e*a = a; a⁴ = a²*a² = e*e = e ...
for the same reason:
<b> = {e, b}
<c> = {e, c}
These are the only proper subgroups of the Klein group.
Say we try to get a subgroup {e, a, b}. But a*b = c ∉ {e, a, b}, so it is not closed, hence not a subgroup.
Same for {e, b, c}: b*c=a, therefore not a subgroup.
(V,*) is not cyclic, since by definition a group G is cyclic if and only if there exists an a ∈ G such that G can be generated by a, that is, G = <a>.
But, every proper subgroup of V is cyclic. 
The Klein four-group is the smallest non-cyclic group.
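
We can confirm computationally that no single element generates V. The sketch below (my own illustration in Python) encodes the table above and computes <x> for each element:

# Illustrative sketch: the Klein four-group table and the subgroup <x>
# generated by each of its elements.

table = {
    ('e', 'e'): 'e', ('e', 'a'): 'a', ('e', 'b'): 'b', ('e', 'c'): 'c',
    ('a', 'e'): 'a', ('a', 'a'): 'e', ('a', 'b'): 'c', ('a', 'c'): 'b',
    ('b', 'e'): 'b', ('b', 'a'): 'c', ('b', 'b'): 'e', ('b', 'c'): 'a',
    ('c', 'e'): 'c', ('c', 'a'): 'b', ('c', 'b'): 'a', ('c', 'c'): 'e',
}

def generated_by(x):
    powers, current = {'e'}, 'e'
    while True:
        current = table[(current, x)]
        if current in powers:
            return powers
        powers.add(current)

for x in ['e', 'a', 'b', 'c']:
    print(x, sorted(generated_by(x)))
# <e> = {e}, <a> = {a, e}, <b> = {b, e}, <c> = {c, e}: no element generates V.
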
Problem. Assume that a group (G, *) of order 4 exists with G = {e, a, b, c}, where e is the identity, a² = b and b² = e. Construct the table for the operation of such a group.

Solution. 
We can immediately write:
* | e a b c
===========
e | e a b c
a | a b ! #
b | b $ e %
c | c & / [

The unknown elements are marked with special characters. Let's see how to work this out. It is pretty much like a puzzle.

The ! element cannot be either a, b or e because it would replicate elements on the column or row. Therefore, it can only be c:

* | e a b c
===========
e | e a b c
a | a b c #
b | b $ e %
c | c & / [

Now, the element # cannot be a, b, or c. It can only be e
* | e a b c
===========
e | e a b c
a | a b c e
b | b $ e %
c | c & / [

The element $ cannot be a, b, or e. Therefore it must be c.

* | e a b c
===========
e | e a b c
a | a b c e
b | b c e %
c | c & / [

So, % can only be a:

* | e a b c
===========
e | e a b c
a | a b c e
b | b c e a
c | c & / [

Now, following the same reasoning, & can only be e, / can only be a, and [ can only be b:

* | e a b c
===========
e | e a b c
a | a b c e
b | b c e a
c | c e a b

We are done. It is worth noting that b = a², and (by the table) c = ab = a³. Using power notation we could write:

* | e  a  b  c
===============
e | e  a  a² a³
a | a  a² a³ e
b | a² a³ e  a
c | a³ e  a  a²

Beautiful. Indeed, any cyclic group G = <a> of order 4 has a table isomorphic to this one.

Definition 2.7. Let G be a group with operation ∗ and H a group with operation •. On the Cartesian product G × H, define an operation *d via (g1, h1) *d (g2, h2) = (g1 ∗ g2, h1 • h2), for all gi ∈ G, hi ∈ H. Under this operation, we call G × H the direct product of G and H.

Note that:
∗  : G × G → G
•  : H × H → H
*d : (G × H) × (G × H) → (G × H)

From this, follows an important theorem:

Theorem 2.7. The direct product of two groups is a group.
Proof. Closure holds because ∗ and • are binary operations, so (g1 ∗ g2, h1 • h2) ∈ G × H. Associativity holds componentwise, since ∗ is associative in G and • is associative in H. The identity is (eG, eH), and the inverse of (g, h) is (g⁻¹, h⁻¹), again componentwise. (q.e.d)

Note, we can find new groups using the concept of subgroups: groups that reside inside other groups. We can also build new groups by combining existing ones through direct products.
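
As a concrete illustration (my own sketch, not from the referenced texts), the Python code below builds the direct product Z2 × Z3 with componentwise addition and checks the group axioms by brute force:

# Illustrative sketch: the direct product Z2 x Z3 under componentwise addition.
from itertools import product

G = list(product(range(2), range(3)))      # pairs (g, h) with g in Z2, h in Z3

def prod_op(p, q):
    return ((p[0] + q[0]) % 2, (p[1] + q[1]) % 3)

identity = (0, 0)
assert all(prod_op(identity, p) == p for p in G)                       # identity
assert all(any(prod_op(p, q) == identity for q in G) for p in G)       # inverses
assert all(prod_op(prod_op(p, q), r) == prod_op(p, prod_op(q, r))
           for p in G for q in G for r in G)                           # associativity

print("Z2 x Z3 is a group of order", len(G))    # 6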

Let’s prove some theorems (or claims, if you prefer) on groups, subgroups and cyclic groups to work the ideas out.

Claim. H=(2Z,+) is a subgroup of (Z,+).
Proof.
Let G=Z, H=2Z.
Every element x ∈ H can be written as x = n + n for some n ∈ G.
Certainly 2Z = {..., -6, -4, -2, 0, 2, 4, 6, ...} ⊂ G.
i) H contains the identity of G.
The identity of G under addition is 0, that can be written as 0=0+0, therefore 0 ∈ H.

ii) H is closed under the group operation.
Take any x, y ∈ H. This means x = (n+n) and y = (m+m) for some n, m ∈ G.
Then x + y = (n+n) + (m+m) = (n+m) + (n+m), by associativity and commutativity of addition in Z.*
Since n + m ∈ G, the sum x + y has the form k + k with k ∈ G, and therefore x + y ∈ H.

iii) H is closed under inverses
Take any x ∈ H. This means x = (n+n) for some n ∈ G.
The inverse of x is -x = (-n) + (-n), which again has the form k + k with k = -n ∈ G.
Therefore, for every x ∈ H, we have -x ∈ H.

We have shown that H = (2Z, +) satisfies all the subgroup conditions (Theorem 2.5) and 2Z ⊂ Z. Therefore 2Z is a subgroup of Z under addition.
(q.e.d.)
*You might feel compelled to write x + y = 2(n + m), which is not wrong, but distributivity of multiplication over addition is not a group axiom, so it is not rigorously available here. Had we claimed (Z, +, ·) is a ring (and it is) we could have done that with no worries.

Claim. If an element x generates a group so does its inverse.
We have to prove that G = <x> implies G = <x⁻¹>. That is, we have to show that every element in G is a power of the inverse of x.
Proof.
Suppose G = <x>. Take any g ∈ G. There is an integer n such that g = xⁿ.
Then g⁻¹ = (xⁿ)⁻¹ = (x⁻¹)ⁿ, so g⁻¹ ∈ <x⁻¹>.
Since <x⁻¹> is a group, it is closed under inverses, so g = (g⁻¹)⁻¹ ∈ <x⁻¹>.
Therefore G ⊆ <x⁻¹>, and certainly <x⁻¹> ⊆ G, hence G = <x⁻¹>.
(q.e.d)

Claim. (Z×Z, +) is not cyclic. (The direct product of Z with itself under addition is not cyclic.)
Proof.
Suppose that Z×Z is cyclic under addition.
Then there are a, b ∈ Z such that Z×Z = <(a, b)>, or in set-builder notation:
Z×Z = {n(a, b) | n ∈ Z}.
Note (0, 1) and (1, 1) ∈ Z×Z, so there are integers n, m such that (0, 1) = n(a, b) = (na, nb) and (1, 1) = (ma, mb).
So,
0 = na; 1 = nb; 1 = ma; 1 = mb.
Since na = 0, either n = 0 or a = 0.
If n = 0, then nb = 0 ≠ 1, a contradiction.
If a = 0, then ma = 0 ≠ 1, another contradiction.
Therefore Z×Z is not cyclic.
(q.e.d)

Claim. Every cyclic group is abelian.
Proof.
Suppose G = <x> is cyclic.
Then using multiplicative notation:
Take any a, b ∈ G. Then
a = xⁿ
b = xᵐ
for some m, n ∈ Z.
ab = xⁿxᵐ = xⁿ⁺ᵐ
ba = xᵐxⁿ = xᵐ⁺ⁿ
Since (Z, +) is abelian, m + n = n + m for every m, n ∈ Z, and therefore ab = ba, and G is abelian.
(q.e.d)

Theorem H1. Let H be a subgroup of G. Define a relation R on G by R = {(x, y) : xy⁻¹ ∈ H}. This relation is an equivalence relation, denoted by ∼H.
Proof. 
Let x, y, z ∈ G. Using the notation we write:
x ∼H y iff xy⁻¹ ∈ H
y ∼H x iff yx⁻¹ ∈ H
x ∼H z iff xz⁻¹ ∈ H
An equivalence relation is symmetric, transitive and reflexive.

i) We say that R is reflexive if (x, x) ∈ R for all x ∈ G.
(x, x) ∈ R iff xx⁻¹ ∈ H. Since xx⁻¹ = e, and H is a subgroup of G, it contains
the identity. We just showed ∼H is reflexive.

ii) We say that R is symmetric if (x, y) ∈ R implies (y, x) ∈ R for all x, y ∈ G.
If (x, y) ∈ R, then xy⁻¹ ∈ H. Since H is a subgroup, it is closed under inverses, so:
(xy⁻¹)⁻¹ = yx⁻¹ ∈ H, i.e., (y, x) ∈ R. We just showed ∼H is symmetric.

iii) We say that R is transitive if, whenever (x, y) ∈ R and (y, z) ∈ R, we also have (x, z) ∈ R, for all x, y, z ∈ G.
Note that xy⁻¹, yz⁻¹ ∈ H, and since H is closed under the group operation, (xy⁻¹)(yz⁻¹) = xz⁻¹ ∈ H, .·. x ∼H z.
We just showed that ∼H is transitive.

Given (i), (ii) and (iii), ∼H is an equivalence relation.
(q.e.d.)

It is worth noting how the equivalence relation in Theorem H1 resembles congruence of integers mod m, which comes from the division algorithm:
a ~ b <-> a and b have the same remainder modulo m <-> m | (a-b)
To think about (before going to Rings):
Suppose we want to define an algebraic structure modulus 2, given the set G={0,1} and we define two binary operations:

(G,+):
+ | 0 1
========
0 | 0 1
1 | 1 0
and

(G,·):
· | 0 1 
========
0 | 1 0 
1 | 0 1 

Is that (G,·) even a group? Yes it is. Indeed its table is isomorphic to the "well-behaved" (G,+) table.

It happens we do not get distributivity of multiplication over addition:
0(1+1) = 0·(0) = 1
and
0·1 + 0·1 = 0 + 0 = 0

Okay. Let's fix this table, get rid of this "nonsense" 0·0 = 1, and do as we would for regular multiplication.
(G,·):
· | 0 1 
========
0 | 0 0 
1 | 0 1 

Oops. Now it is only a monoid, because there is no inverse for 0, i.e., there is no element that multiplied by 0 results in the identity 1.
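
The failure of distributivity above can also be confirmed by brute force. The Python sketch below (my own illustration) encodes the (G, +) table and the first (G, ·) table, the one with 0·0 = 1, and tests every triple:

# Illustrative sketch: test distributivity of '·' over '+' on G = {0, 1}.
add = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}    # the (G,+) table
mul = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 1}    # the first (G,·) table, with 0·0 = 1

def distributive(add_t, mul_t, elements=(0, 1)):
    for x in elements:
        for y in elements:
            for z in elements:
                left = mul_t[(x, add_t[(y, z)])]
                right = add_t[(mul_t[(x, y)], mul_t[(x, z)])]
                if left != right:
                    print(f"{x}·({y}+{z}) = {left} but {x}·{y} + {x}·{z} = {right}")
                    return False
    return True

print(distributive(add, mul))    # False: distributivity fails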

Rings

If groups can be roughly seen as algebraic structures in which there is addition, rings are roughly those in which there is addition and multiplication.

Definition 2.8. A ring is a triple (R, +, ⋅), where R is a set and + and ⋅ are binary operations on R satisfying [5]:

(1) (R, +) is a commutative group. (It is an abelian group with identity denoted by 0)

(2) (R, ⋅) is a monoid.

(3) Multiplication is distributive over addition in R. That is, for all x, y, z ∈ R, we have x ⋅ (y + z) = x⋅ y + x⋅z and (y + z) ⋅ x = y ⋅ x + z ⋅ x.

Condition (2) tells us that commutativity of multiplication is not required in the definition of a ring.

Condition (3) tells us that there are two kinds of distributivity: left distributivity and right distributivity. In a commutative ring, where multiplication is commutative, each one implies the other.

A ring that does not have multiplicative identity is “almost a ring” or a “rng”. In this case, (R,· ) is a semigroup.

Examples and comments

  • (Z, +) is closed, commutative, associative, with identity 0, and every element has an inverse: (Z, +) is an abelian group.
  • (Z, ·) is closed, commutative, associative, with identity 1, but it lacks inverses. Therefore (Z, ·) is a commutative monoid.
  • Also multiplication is distributive over addition in Z: k·(m+n) = km + kn, for k, m, n integers.
  • (Z,+,·) is an algebraic structure called commutative ring.
  • (2Z, +, ·) is almost a ring, because (2Z, ·) is a semigroup and not a monoid: it lacks a multiplicative identity. More generally, for an integer n with n ≠ 0 and n ≠ ±1, (nZ, +, ·) is a “rng”.
  • (N, +, ·) is not a ring because (N, +) is not a group (it lacks additive inverses). (N, +) and (N, ·) are commutative monoids. This structure is called a semiring.
  • The main difference between a ring and a semiring is that in the latter there can be elements that do not have additive inverses.
  • (Zn, +n, ·n) for n integer > 0, is a commutative ring.
  • When the product of two nonzero elements is zero, we say they are zero divisors. In (Z6, +6, ·6), 3·2 = 0, so 3 and 2 are zero divisors (see the sketch after this list).
  • When a ring has no zero divisors we say it is a domain.
  • A commutative domain is said to be an integral domain.
  • The cancellation laws for multiplication in a ring can fail even if we exclude 0. For example in (Z6, +6, ·6) we have 1·3= 3 = 5·3 (but 5 ≠ 1).
  • The cancellation laws for multiplication in a ring can be stated as: if a, b, c ∈ 𝑅 with a ≠ 0, then ab = ac -> b = c, and ba = ca -> b = c.
  • We define subtraction and division as: a − b = a + (−b), and, if b has a multiplicative inverse, a/b = ab⁻¹.
  • Let 𝑅 = {0} and define addition and multiplication on 𝑅 by 0 + 0 = 0 and 0 ⋅ 0 = 0. Then (𝑅, +, ⋅) is a ring with additive and multiplicative identity both equal to 0. It is very easy to verify that each of the ring axioms hold. This ring is called the zero ring.
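
As promised above, here is a small Python sketch (my own illustration) that finds the zero divisors of Z6 and an explicit failure of cancellation:

# Illustrative sketch: zero divisors and failure of cancellation in Z6.
n = 6
Zn = range(n)

# zero divisors: nonzero a, b with a·b = 0 (mod 6)
zero_divisors = sorted({a for a in Zn for b in Zn
                        if a != 0 and b != 0 and (a * b) % n == 0})
print(zero_divisors)      # [2, 3, 4]

# cancellation failure: a·b = a·c (mod 6) with a != 0 but b != c
a, b, c = next((a, b, c) for a in Zn for b in Zn for c in Zn
               if a != 0 and b != c and (a * b) % n == (a * c) % n)
print(f"{a}·{b} = {a}·{c} = {(a * b) % n} (mod 6), yet {b} != {c}")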

Now, let’s prove some theorems.

Theorem 2.8 The sum of two even integers is even.

Sounds obvious. But it is not :). There is no ring axiom telling us that; we need to use the axioms to prove it.

Proof.
Recall that (Z,+,·) is a ring, and we shall use its axioms.
Any even integer m, n, can be written as m=2j, n=2k, for j, k also integers.
So m+n=2j+2k.
Using the distributive property of multiplication over addition:
m+n=2(j+k).
Since Z is a ring, j+k is also an integer, by the closure property.
.·. m+n is even. (q.e.d.)

Theorem 2.9. The product of two integers that are each divisible by k is also divisible by k.
Proof.
Recall that an integer n is divisible by k (k|n) if there is another integer m, such that n = km.
Let n, m, a, b, c and k be arbitrary but specific integers.
Suppose n and m are divisible by k. Then we can write: n=ka and m=kb.
By multiplicative associativity:
nm = kakb =(ka)(kb) = k(a(kb)). 
By closure of multiplication in Z, we let c = a(kb), then write nm = kc.
Therefore nm is divisible by k. (q.e.d)

Theorem 2.10 Let (R,+,·) be a ring with additive identity 0, and let a be any element of R. Then 
i) a·0 = 0 and 
ii) 0·a = 0.
Proof.
i) We have to show that a·0 = 0.
Then,
a·0 = a·(0+0) (0 is the additive identity)
a·0 = a·0 + a·0 (left distributivity)
a·0 + (-a·0) = (a·0 + a·0) + (-a·0) (adding the additive inverse of a·0 to both sides)
0 = a·0 + (a·0 + (-a·0)) (inverses on the left, associativity on the right)
0 = a·0 + 0 (inverses)
0 = a·0 (additive identity)
a·0 = 0 (q.e.d)

ii) We have to show that 0·a = 0,
Then
0·a = (0+0)·a (additive identity)
0·a = 0·a + 0·a (right distributivity)
0·a - 0·a = 0·a + 0·a - 0·a (subtracting 0·a on both sides)
0·a - 0·a = 0·a + (0·a - 0·a) (associativity)
0 = 0·a + 0 (inverses)
0 = 0·a (additive identity)
0·a = 0 (q.e.d.)

Theorem 2.11. Let (R,+,·) be a ring. This structure is the zero ring if and only if 1 = 0.
Proof.
(1) Suppose 1 = 0.
Let a be an arbitrary but specific* element in R. 
(2) Then a = a·1, by the definition of multiplicative identity.
(3) Also, we know from Theorem 2.10 that a·0 = 0. 
By (1), a·1 = a·0.
Followed by (2) and (3), a = 0. 
Therefore (R,+,·) is the zero ring. (q.e.d)
Conversely, the zero ring has precisely one element, so 1 and 0 are the same element. Put another way, in the zero ring the additive identity and the multiplicative identity coincide. This is simple yet deep.

* "Arbitrary but specific" means we are taking a random element a, but after picking it up, it remains the same.

Fields

The main difference between a ring and a field is that in a ring there may be non-zero elements that do not have multiplicative inverses. In (Z, +, ·), the ring of integers under addition and multiplication, no element other than 1 and -1 has a multiplicative inverse. In the rationals Q, every nonzero number a/b (a, b ∈ Z, both nonzero) has an inverse b/a. (Q, +, ·) is a field.

As described in [7]: «a field is a set of elements in which a pair of operations called multiplication and addition is defined analogous to the operations of multiplication and addition in the real number system (which is itself an example of a field). In each field F there exist unique elements called 0 and 1 which, under the operations of addition and multiplication, behave with respect to all the other elements of F exactly as their correspondents in the real number system. In two respects, the analogy is not complete: 1) multiplication is not assumed to be commutative in every field, and 2) a field may have only a finite number of elements. More exactly, a field is a set of elements which, under the above mentioned operation of addition, forms an additive abelian group and for which the elements, exclusive of zero, form a multiplicative group and, finally, in which the two group operations are connected by the distributive law. Furthermore, the product of 0 and any element is defined to be 0. If multiplication in the field is commutative, then the field is called a commutative field.»

Definition 2.9 A field is a triple (F, +, ⋅), where F is a set and + and ⋅ are binary operations on F satisfying

(1) (F, +) is a commutative group.

(2) (F\{0}, ⋅) is a commutative group.

(3) ⋅ is distributive over + in F. That is, for all x, y, z ∈ F, we have x ⋅ (y + z) = x ⋅y + x ⋅z and (y + z)⋅x = y⋅x + z⋅x.

(4) 0 ≠ 1.

Examples and comments

Claim: Let (F,+,⋅) be a field with N ⊆ 𝐹. Then Q ⊆ 𝐹. 
Proof:
Let n ∈ Z. If n ∈ N, then n ∈ F because N ⊆ F. If n ∉ N, then n < 0 and its additive inverse -n ∈ N ⊆ F.
Since F is a field, n = -(-n) ∈ F. Hence Z ⊆ F.
For each n ∈ Z\{0}, 1/n ∈ F, because F is a field and the multiplicative inverse property holds in F.
Now, let m/n ∈ Q, with m ∈ Z and n ∈ Z\{0}.
Since Z ⊆ F, m ∈ F.
F is closed under multiplication, therefore m·(1/n) ∈ F.
Since m/n was an arbitrary element of Q, we see that Q ⊆ F. (q.e.d.)
  • (R,+,·) is a field, the real numbers field.
  • (C,+,·) is a field, the complex numbers field.
  • We do not require 0 ≠ 1 when defining a ring (Theorem 2.11).
  • If we replace condition (2) by “(F, ·) is a monoid” and drop condition (4), we get the definition of a ring.
  • Every field is a commutative ring. Conversely, not every commutative ring is a field, as the well known commutative ring (Z,+,·) is not a field.
  • Every field is an integral domain: it is a commutative ring, and it has no zero divisors, for if ab = 0 with a ≠ 0, then b = a⁻¹(ab) = a⁻¹·0 = 0.

3 Structures, Homomorphisms and Isomorphisms

Recall that a set is allowed to have no special properties; its elements can be unrelated. A structure, on the other hand, is a set with properties. Recall that an n-ary relation on a set S is a subset of Sⁿ. Example: R = {(x, y, z) ∈ Z³ : x + y = z} is a ternary relation on Z. The triple (1, 2, 3) ∈ R ⊆ Z³. S = {(x, y) ∈ R² : x > y} is a binary relation on R. The pair (8.5, 3.1) ∈ S ⊆ R². There is no such thing as a 0-ary relation.

An n-ary operation is a function f : Sⁿ → S. This operation is defined for every n-tuple in Sⁿ, and the set S is closed under f. A 0-ary operation is a constant in S. Addition, subtraction and multiplication are examples of binary operations on R. Negation is an example of a unary operation on R, i.e., f: R → R such that f(x) = -x.

A structure is a set with a collection of finitary operations and relations defined on the set. This set is the domain of the structure. We say two structures have the same type if, for each n, they have the same number of n-ary operations and the same number of n-ary relations.

When talking about groups we also studied subgroups: some are proper subgroups, and some groups have no subgroups other than the trivial ones. Loosely, the same happens for other structures.

Comments and Examples [6]

  • Let (S, *) be a semigroup. A substructure (T, *) of (S, *) is called a subsemigroup. Notice that T ⊆ S and the operation * must be the same for both structures. Also, * is a binary operation on T, which means that T is closed under *. Since T ⊆ S, it follows that * is associative in T: associativity is inherited by subsets.
  • A substructure (N,*) of a monoid (M,*) is a subsemigroup of (M,*), but may or may not be a submonoid of (M,*). For example, let C = Nat ∖ {0, 1} = {2, 3, 4, …} Then (C, ·) is a subsemigroup of the monoid (Nat, ⋅), but (C, ⋅) is not a submonoid of (Nat, ·) because C is missing the multiplicative identity 1. (to avoid confusion I used Nat for the natural numbers)
  • If (M,*) is a monoid with identity e, we can define a submonoid to be a substructure (N,*) of (M,*) such that N contains e. In other words, if we wish to leave the identity out of the structure, we need to explicitly mention that the domain of the substructure contains the identity in order to guarantee that we get a submonoid.
  • Let (G,*, −1 , e) be a group, where −1 is the unary inverse operator and e is the identity of G. A substructure (H,*, −1 , e) of (G,*, −1 , e) is called a subgroup. Notice that the operations * and −1, and the identity e must be the same for both structures. By making the unary inverse operator part of the structure, we have guaranteed that the inverse property holds for the substructure.
  • Let (R, +,·, -, 1) be a ring, where – is the unary additive inverse operator and 1 is the multiplicative identity of R. A substructure (S, +,·, -, 1) of (R, +,·, -, 1) is called a subring. Notice that the operations +, ⋅, –, and the multiplicative identity 1 must be the same for both structures. By the definition of a structure, S is closed under +,·, -.
  • (Z, +, ⋅) has no subring other than itself. To see this, let (A, +, ·, 1) be a subring of (Z, +, ·, 1). First note that the multiplicative identity 1 ∈ A. Using closure of addition and the principle of mathematical induction, we can then show that each positive integer is in A (1 generates every positive integer). But A is also closed under the additive inverse of Z, i.e., for each positive integer n, -n ∈ A. It follows that A = Z. (Note 0 ∈ A, because the additive identity is in any (sub)ring.)

Homomorphisms and Isomorphisms

A homomorphism is a function from one structure to another structure of the same type that preserves all the relations and functions of the structure:

Definition: If 𝔄 and 𝔅 are structures of the same type with underlying domains 𝐴 and 𝐵, then a homomorphism is a function 𝑓: 𝐴 → 𝐵 such that for each 𝑛 ∈ N,

1. if 𝑅 is an 𝑛-ary relation, then 𝑅𝐴(𝑎1, 𝑎2, …, 𝑎𝑛) if and only if 𝑅𝐵(𝑓(𝑎1), 𝑓(𝑎2), …, 𝑓(𝑎𝑛)).

2. If 𝐹 is an 𝑛-ary function, then 𝑓(𝐹𝐴 (𝑎1, 𝑎2, …, 𝑎𝑛)) = 𝐹𝐵(𝑓(𝑎1), 𝑓(𝑎2), …, 𝑓(𝑎𝑛)).

In particular, 2 implies that if 𝑐 is a constant, then 𝑓(𝑐𝐴) = 𝑐𝐵.

Examples

  • Let (𝑆,⋆) and (𝑇,∘) be semigroups. A semigroup homomorphism is a function 𝑓: 𝑆 → 𝑇 such that for all 𝑎, 𝑏 ∈ 𝑆, 𝑓(𝑎 ⋆ 𝑏) = 𝑓(𝑎) ∘ 𝑓(𝑏).
  • Let (𝑀,⋆,𝑒𝑀) and (𝑁,∘,𝑒𝑁) be monoids, where 𝑒𝑀 and 𝑒𝑁 are the identities of 𝑀 and 𝑁, respectively. A monoid homomorphism is a function 𝑓: 𝑀 → 𝑁 such that for all 𝑎, 𝑏 ∈ 𝑀, 𝑓(𝑎 ⋆ 𝑏) = 𝑓(𝑎) ∘ 𝑓(𝑏) and 𝑓(𝑒𝑀) = 𝑒𝑁.
  • Let (𝑅, +𝑅, ⋅𝑅, 1𝑅) and (𝑆, +𝑆, ⋅𝑆, 1𝑆) be rings, where 1𝑅 and 1𝑆 are the multiplicative identities of 𝑅 and 𝑆, respectively. A ring homomorphism is a function 𝑓: 𝑅 → 𝑆 such that for all 𝑎, 𝑏 ∈ 𝑅, 𝑓(𝑎 +𝑅 𝑏) = 𝑓(𝑎) +𝑆 𝑓(𝑏), 𝑓(𝑎 ⋅𝑅 𝑏) = 𝑓(𝑎) ⋅𝑆 𝑓(𝑏), and 𝑓(1𝑅) = 1𝑆.
  • A field homomorphism is the same as a ring homomorphism. The multiplicative inverse is automatically preserved.

An isomorphism is a bijective homomorphism. Isomorphic structures are considered to be the same “up to isomorphism“: elements may only “change names”, but they behave exactly the same way in both structures.
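
A minimal concrete example (my own sketch, using the familiar reduction-mod-n map): f(a) = a mod 6 is a group homomorphism from (Z, +) to (Z6, +6), and the code below spot-checks the defining property on a sample of integers.

# Illustrative sketch: f(a) = a mod 6 as a homomorphism (Z, +) -> (Z6, +6).

def f(a, n=6):
    return a % n

sample = range(-20, 21)
print(all(f(a + b) == (f(a) + f(b)) % 6 for a in sample for b in sample))   # True
print(f(2), f(8))   # 2 2 -- f is onto Z6 but not one-to-one: a homomorphism,
                    # not an isomorphism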

4 More on Groups

Cosets and Lagrange’s Theorem

Lagrange’s Theorem says that any subgroup H of a group G decomposes G into “cosets“:

Definition C1. If H is a subgroup of G, then by a right coset of H in G we mean a subset of the form Hg, where g ∈ G and Hg={hg :h ∈ H}.

Definition C2. If H is a subgroup of G, then by a left coset of H in G we mean a subset of the form gH, where g ∈ G and gH = {gh : h ∈ H}.

The main thing to see here is that the cosets of H are the equivalence classes of an equivalence relation on G (see Theorems H1 and H2 below), and therefore they partition the group. In a commutative group, right and left cosets are always identical.

Example. Let H = {e, (123), (132)} be a subgroup of G = S3 = {e, (12), (13), (23), (123), (132)}. The left cosets of H in G are:
(e)H = {(e), (123), (132)}
(123)H = {(123), (132), (e)}
(132)H = {(132), (e), (123)}
.·. (e)H = (123)H = (132)H 
you can check that (12)H = (13)H = (23)H = {(12), (13), (23)}.
Then we have 2 left cosets of H in G.
The right cosets happen to be the same:
H(e) = H(123) = H(132) = {(e), (123), (132)}
and
H(12) = H(13) = H(23) = {(12), (13), (23)}.
But we know that S3 is non-commutative, so this is not always the case.
Take the subgroup K = {(e), (12)}:
(e)K = (12)K = {(e), (12)}
(13)K = (123)K = {(13), (123)}
(23)K = (132)K = {(23), (132)}
We got 3 left cosets of K in G.

The right cosets:
K(e) = K(12) = {(e), (12)}
K(13) = K(132) = {(13), (132)}
K(23) = K(123) = {(23), (123)}.
We got 3 right cosets of K in G.

Definition. The index of H in G is the number of left cosets of H in G, denoted by [G:H].
For this example:
[G:H] = 2
[G:K] = 3
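
The coset computation for K in S3 can be reproduced mechanically. The Python sketch below (my own illustration, using the same tuple representation of permutations as the earlier S3 sketch) computes the left and right cosets of K and the index [G:K]:

# Illustrative sketch: left and right cosets of K = {e, (12)} in S3.
from itertools import permutations

def compose(f, g):
    return tuple(f[g[i]] for i in range(len(g)))    # apply g first, then f

names = {
    (0, 1, 2): 'e',    (1, 0, 2): '(12)',  (2, 1, 0): '(13)',
    (0, 2, 1): '(23)', (1, 2, 0): '(123)', (2, 0, 1): '(132)',
}

G = list(permutations(range(3)))
K = [(0, 1, 2), (1, 0, 2)]                          # {e, (12)}

left  = {frozenset(compose(g, k) for k in K) for g in G}    # cosets gK
right = {frozenset(compose(k, g) for k in K) for g in G}    # cosets Kg

print("left cosets: ", [sorted(names[p] for p in c) for c in left])
print("right cosets:", [sorted(names[p] for p in c) for c in right])
print("index [G:K] =", len(left))                            # 3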

We can define right cosets using the equivalence relation of Theorem H1:
Theorem H2. Let H be a subgroup of G. For a ∈ G, let [a] denote the equivalence class of a under ∼H (Theorem H1). Then [a] = Ha. Thus, the equivalence classes of ∼H are precisely the right cosets of H.

Proof.
x ∈ [a] <-> x ∼H a <-> xa⁻¹ ∈ H <-> x ∈ Ha
(q.e.d)
If we were writing this in additive notation:
x ∈ [a] <-> x ∼H a <-> x - a ∈ H <-> x ∈ H + a

From this theorem follows a corollary.
Corollary. Let H be a subgroup of G and let a, b ∈ G. Then Ha = Hb if, and only if, ab⁻¹ ∈ H.
Proof.
Ha = Hb <-> [a] = [b] <-> a ∼H b <-> ab⁻¹ ∈ H
(q.e.d)
Or in additive notation:
H+a = H+b <-> [a] = [b] <-> a ∼H b <-> a - b ∈ H

The analogue holds for left cosets: x ∈ aH <-> a⁻¹x ∈ H

Since the right (or left) cosets are the classes of an equivalence relation, they partition the set:

Lemma. The cosets of a subgroup H partition the larger group G.

And from all the above, we can state:
Lemma. Let H be a subgroup of a group G. Suppose g1, g2 ∈ G. Then:
g1H = g2H <-> 
Hg1⁻¹ = Hg2⁻¹ <-> 
g1H ⊆ g2H <-> 
g2 ∈ g1H <-> 
g1⁻¹g2 ∈ H

And we have what we need to prove the following theorem.
Theorem (Lagrange's). Let G be a finite group and let H be a subgroup of G. Then |G|/|H| = [G:H] is the number of distinct left cosets of H in G. In particular, the number of elements in H must divide the number of elements in G.
Proof. 
The group G is partitioned into [G:H] distinct left cosets. Each left coset has |H| elements; therefore, |G| = [G:H]|H|.
(q.e.d) 
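
Lagrange's relation |G| = [G:H]·|H| is easy to verify numerically for small groups. The sketch below (my own illustration) does so for the cyclic subgroups of Z12 under addition modulo 12:

# Illustrative sketch: verify |G| = [G:H]*|H| for the cyclic subgroups of Z12.
n = 12
G = set(range(n))

def cyclic_subgroup(a):
    H, x = set(), 0
    while True:
        H.add(x)
        x = (x + a) % n
        if x == 0:
            return H

for a in range(n):
    H = cyclic_subgroup(a)
    cosets = {frozenset((h + g) % n for h in H) for g in G}
    print(f"<{a}>: |H| = {len(H)}, [G:H] = {len(cosets)}, product = {len(cosets) * len(H)}")
    assert len(cosets) * len(H) == len(G)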

Quotient Groups

We keep going. What do these multiplication tables have in common?

G = {even, odds}
(G, +):
+     | even | odds
---------------------
even  | even | odds
odds  | odds | even

G = Z2
(G, +2):
+2    | 0    | 1
---------------------
0     | 0    | 1
1     | 1    | 0
S3 X S3 -> S3

TBC

Normal Subgroups

TBD

Sylow’s Theorem

TBD

5 More on Rings and Fields

Ring Homomorphisms and Ideals

TBD

Finite Fields

TBD

Extensions Fields

TBD

Splitting fields

TBD

6 Galois Fields

TBD

References

[1] Deskins, W. E.. Abstract Algebra (Dover Books on Mathematics) Dover Publications. Kindle Edition.

[2] Pinter, Charles C.. A Book of Abstract Algebra: Second Edition. Dover Reprints. Kindle Edition.

[3] Alcock, Lara. How to Think About Abstract Algebra. OUP Oxford. Kindle Edition.

[4] Garrity, Thomas A.. All the Math You Missed ((But Need to Know for Graduate School)) (p. 194). Cambridge University Press. Kindle Edition.

[5] Warner, Steve. Abstract Algebra for Beginners: A Rigorous Introduction to Groups, Rings, Fields, Vector Spaces, Modules, Substructures, Homomorphisms, Quotients, Permutations, Group Actions, and Galois Theory. Get 800. Kindle Edition.

[6] Warner, Steve. Pure Mathematics for Beginners: A Rigorous Introduction to Logic, Set Theory, Abstract Algebra, Number Theory, Real Analysis, Topology, Complex Analysis, and Linear Algebra. Get 800. Kindle Edition.

[7] Artin, Emil. Galois Theory: Lectures Delivered at the University of Notre Dame by Emil Artin (Notre Dame Mathematical Lectures,: 0002 (Dover Books on Mathematics). Dover Publications. Kindle Edition.

[8] Saracino, Dan. Abstract Algebra: A First Course. Waveland Pr Inc. Kindle Edition.
