Most ideas probably involve some level of abstraction, but ideas that are directly related to reality (for example, ideas of tangible objects and experiences) may be merely assemblies of simpler ideas that are explainable in terms of what can be experienced. Two types of these ideas can be spoken of: ideas with perfectly precise definitions and ideas with nebulous definitions.
In order to understand the derivation of these ideas, it is necessary to consider, first, the inputs involved in their construction and, second, how these inputs are accepted. This can be done by examining the senses themselves, but rather than speaking directly of the senses, I will first abstract the idea of these inputs.
Consider three different types of input sensors, labeled “A”, “B”, and “C”. I speak of input “sensors” rather than inputs because we know only what we sense, not the origin of what we sense. For the sake of simplicity, these input sensors will be limited to detecting values ranging from 0 to 1. That which is detectable by “A” is in units of “a”. That which is detectable by “B” is in units of “b”. Supposing that the mechanisms of the mind cause overlap between certain input sensors, we presume that “C” can contribute to the detection of units of “b” in addition to detecting things in units of “c”.
To chart this:
- A = 0 a units to 1 a units
- B = 0 b units to 1 b units
- C = 0 c units to 1 c unit, plus some unknown range of b units
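The chart above can be sketched in code. This is a minimal, illustrative encoding only: the names (`clamp`, `SENSORS`) and the dictionary layout are my assumptions, and the unknown amount of b units contributed by C is deliberately left unquantified, as in the text.

```python
def clamp(x: float) -> float:
    """Input sensors are limited to detecting values from 0 to 1."""
    return min(max(x, 0.0), 1.0)

# Each sensor maps to the units it can detect. C overlaps with B:
# it contributes some unknown amount in b units as well as c units.
SENSORS = {
    "A": ("a",),
    "B": ("b",),
    "C": ("c", "b"),
}

# Even in unison, no sensor detects units of "d":
detectable = {unit for units in SENSORS.values() for unit in units}
print("d" in detectable)  # False
```

The set comprehension at the end makes the first point below concrete: pooling every sensor still leaves “d” unobservable.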
The first thing that can be said about this system is that neither A, B, nor C can detect units of “d” if “d” is measurable in some way. This means that there may be attributes of a system that are unobservable by A, B, and C, even when those input sensors are utilized in unison.
The second thing that can be said about this system is that neither A, B, nor C can detect all of the possible values even of things with attributes describable in terms of units “a”, “b”, and/or “c”.
Thirdly, nowlos itself does not directly correlate to an input sensor, since input sensors themselves may overlap in the types of things they can detect. This necessitates the view that there are two levels of input sensors: one directly connected with nowlos and one directly connected with the mind. The chain of information would then be as follows:
Information > input sensor level 1 > mind > input sensor level 2 > nowlos
Notice that I have not specified the location of the mind or of the sensors.
Considering these things, let us presume that the mind formulates ideas and alters the information that comes from the input sensors. Hence, what is actually sent to the innermost sensors (the level 2 sensors) does not necessarily relate directly to the reality received by the outermost sensors (the level 1 sensors).
Reconsidering the system of A, B, and C presented above, A, B, and C are level 1 sensors, because C can detect things in units of “b” and not just units of “c”. This has direct consequences for the formulation of ideas because it results in overlapping dependencies. It therefore introduces an element of redundancy in cases where B and C both detect things in “b” units. To deny this overlap results in falsity, but such denial may nevertheless be within the capacity of the human mind. (I say “may be” only to acknowledge the possibility that this claim is wrong, even though I do not believe it is. That is, you can deny the possibility of redundancy because I might be wrong.)
Let us turn our considerations to the system of A, B, and C and, in particular, to the range of values. In speaking about ideas, we can only speak truthfully about the innermost input sensors, because they are all we are able to nowlos. However, we can speak about the outermost sensors theoretically, because we presume the mind uses information from those sensors. (I abstracted the concept of these input sensors to allow for inputs not customarily recognized – that is, sensors not related to the standard five senses of sight, hearing, taste, touch, and smell – and to allow for overlap between them.)
The range of values from the input sensors is limited, but the values can be grouped. In mathematical terms, we might group them like coordinates in space, or, in this case, in terms of what is detectable. (Note that the outermost input sensors do not necessarily detect in terms of differences in base properties. I say nothing of the innermost input sensors.) That is, a set of values could be represented by (a units, b units, c units), e.g. (0.2, 0.8, 0.45). This works when describing perfectly precise idea definitions that are tailored to our discussion.
Ideas that are defined as perfectly precise have a strict set of base properties. Since we can only speak of a, b, and c units and not of base properties, let the definition of “perfectly precise ideas” speak of detectable units instead of base properties (even though the assembly of base properties results in something detectable). Notably, this makes the definition susceptible to fallacy, but it gives it an element of practicality in this case. In the mathematical terms described above, a perfectly precise definition would be a single set of values from sensors A, B, and C, or at least a mental perception of such values even if they were never actually detected by A, B, and C.
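A perfectly precise definition, as described above, can be sketched as a single exact tuple of detected values. The class name, the optional tolerance parameter, and the example values are all illustrative assumptions of mine, not anything the text prescribes.

```python
from dataclasses import dataclass

# One detection: (a units, b units, c units), each in [0, 1].
Detection = tuple[float, float, float]

@dataclass(frozen=True)
class PreciseIdea:
    """A perfectly precise idea: a single exact set of detected values."""
    values: Detection

    def fulfilled_by(self, detection: Detection, tol: float = 0.0) -> bool:
        # An exact match is required (tol allows a tiny numeric slack);
        # anything else simply "is not" what the idea specifies.
        return all(abs(v - d) <= tol for v, d in zip(self.values, detection))

idea = PreciseIdea((0.2, 0.8, 0.45))
print(idea.fulfilled_by((0.2, 0.8, 0.45)))  # True: "is" the idea
print(idea.fulfilled_by((0.2, 0.8, 0.44)))  # False: "is not" the idea
```

The all-or-nothing `fulfilled_by` check is the point: such an idea admits only “was” or “was not”, never “more” or “less”.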
The benefit of such ideas is obvious – they refer to a specific set of detected properties, which would be the first step in allowing us to examine the exact state of something and decide whether it fulfills the definition of the idea (and thus we could say whether something “was” or “was not” what the idea specified). This provides an exact representation of truth from the standpoint of relating ideas back to the reality they originated from.
Even so, a great number of errors can arise with perfectly precise ideas from the standpoint of practicality. First and foremost, the mind itself is not a guaranteed static storage location for ideas. Hence, the accuracy and precision of an idea may be lost over time. It is also possible that the idea may be changed to account for new information, resulting in the reinterpretation of previous experiences whose identity, definition, or accuracy relied upon the constant nature of the idea in question.
That said, nebulous ideas tend to be more attainable in addition to being more practical (assuming that we make the same alteration to the definition as we did with perfectly precise definitions: they describe detected values and not simply base properties). However, they are not so easy to describe in mathematical terms. Furthermore, they do not allow for direct comparisons with reality that come to a definitive conclusion; they only allow us to say whether something is “more” or “less” in fulfillment of the definition of an idea. (A simple example of this is the drawing of a straight line segment. No artist I know can draw a line segment perfectly straight without faults, as would be required to fulfill a perfectly precise idea definition. But with a nebulous idea of “straight” taken in a practical sense, we can declare a line to be “more” or “less” “straight”.)
In relation to the input sensors, a nebulous idea would be a collection of ranges and fluctuations within those ranges. For example, the values detected by A might range from 0 to 1, but the values of B might be allowed to range only from 0 to 0.1 in order for whatever was detected to fulfill a particular definition of an idea. Notably, it is important that “nebulous ideas” have some restrictions placed upon them for practical purposes. That is, something becomes “less” in fulfillment of a particular definition by changing along a certain dimension (that is, in certain units, “a”, “b”, and/or “c”), such that certain sets of values are deemed not in fulfillment of the definition.
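A nebulous idea can likewise be sketched as a set of allowed ranges, one per unit, following the example of B being restricted to 0 through 0.1. The `degree` function below (total distance outside the allowed ranges) is only one of my own illustrative ways to rank detections as “more” or “less” in fulfillment; the text does not prescribe any particular measure.

```python
Range = tuple[float, float]             # inclusive (low, high) in one unit
Detection = tuple[float, float, float]  # (a units, b units, c units)

class NebulousIdea:
    """A nebulous idea: an allowed range of values in each unit."""

    def __init__(self, ranges: tuple[Range, Range, Range]):
        self.ranges = ranges

    def fulfilled_by(self, detection: Detection) -> bool:
        # Values outside any allowed range are deemed not in fulfillment.
        return all(lo <= v <= hi for v, (lo, hi) in zip(detection, self.ranges))

    def degree(self, detection: Detection) -> float:
        # 0.0 means fully in fulfillment; larger totals mean "less" in
        # fulfillment, since the detection strays further from the ranges.
        return sum(max(lo - v, 0.0, v - hi)
                   for v, (lo, hi) in zip(detection, self.ranges))

# A free over [0, 1]; B restricted to [0, 0.1]; C free over [0, 1].
idea = NebulousIdea(((0.0, 1.0), (0.0, 0.1), (0.0, 1.0)))
print(idea.fulfilled_by((0.5, 0.05, 0.3)))        # True: within all ranges
print(round(idea.degree((0.5, 0.3, 0.3)), 2))     # 0.2: B strays by 0.2
```

Unlike the perfectly precise case, `degree` lets us compare two detections and say which is “more” in fulfillment, even when neither matches exactly.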
A number of potential errors arise with nebulous ideas. For instance, it is possible that certain sets of values detectable by the input sensors A, B, and C are not realistically possible, yet are included in a definition. If such a definition also includes realistic things, it then seems to imply the realism of the unrealistic ones. This, among other reasons, is why nebulous ideas cannot accurately and precisely describe reality when they are too inclusive. The broader the definition, the more useful it may become, but the more likely it is to contain falsity and unrealistic things.
Throughout this article, I refer primarily to ideas that reference realities, as opposed to abstractions that are not meant to refer to any kind of reality but may be used for other purposes, such as analyses of hypothetical things and scenarios.
Note: I may add more to this article later.