Re: Quotes from the Chief Engineer and I
One thing to remember is that conventions exist for a reason. For some purposes it is important to know whether a computer stores the matrix

a b c
d e f
g h i

as a b c d e f g h i (row major) or as a d g b e h c f i (column major). It is also sometimes important to know whether your language treats a two-dimensional array as an array of arrays or as a single array, since that usually determines where the data are actually stored. As has been said, it can be important to know how your language, or more properly your compiler, lays out storage in order to make certain operations faster.

But in most cases, if you are trying to understand a two-dimensional matrix as a linear thing, you are missing the point. If your data are appropriately thought of as linear, you should use a one-dimensional array. The convention exists to make the relationship between the pieces of data easier to understand conceptually. An old boss of mine would probably say something like, "Yes, you can conceptualize this as column, row, comment it, and the code will work. But you won't. At least not here."

You should use a matrix when you have data that is referenced according to two different indices or categories. Consider an example: imagine you are storing data for each of three starting orientations and five initial positions for your robot. There is no obvious reason to make one the row and the other the column. When you are designing the program, if you create a 3-row by 5-column table, your coding standards should absolutely dictate how that will be coded. It should not be left up to the whim of one individual programmer.

The row-major convention was most likely chosen because a majority of the people who developed the programming languages we now use read and write in a left-to-right, top-down (in other words, row-major) language like English.