View Full Version : PBASIC sucks


maDGag
19-01-2003, 01:19
y is parallax stil using PBASIC

primitive lower life form ?

switch to C++, Java, VB

i h8 labels

Tom Schindler
19-01-2003, 01:55
i h8 labels
If you are going to complain about something, at least take the time to completely spell out your words....

Also, think about what you are saying: in order for a language to run on a chip, it has to be compiled down to machine code. Languages such as VB and Java require an interpreter to run and, without a lot of effort, cannot be compiled to machine code.

Let's try to keep the posts mainly positive.

Thanks

Tom Schindler

FotoPlasma
19-01-2003, 05:18
Word to Tom Schindler.

If you're so against PBASIC, I'd like to see you write all of your robot code in PIC assembly... PBASIC may be unnecessarily uncool, but I don't think it's really supposed to be easy. If it were easy, everyone would do it.

The way I see it, PBASIC's faults are just another design challenge. We can't use certain things/materials/mechanisms on the robot, physically, for no reason other than to challenge us to do more with less, I believe. The same goes for the price limit, limits on the use/amount of pneumatics, etc.

I'm not too coherent right now, but I think I'm making myself clear.

We've all had our little frustrating times with PBASIC, but I think it's a good thing...

PyroPhin
19-01-2003, 08:46
Get a grip...

Please tell me you aren't one of those people who thinks we should put Linux, C++, or some other overkill programming language on the RC. PBASIC is the easiest and best solution for FIRST; you don't need a degree in comp sci, or have to be an "l33t HaX0r", to use it.

And I agree completely with FotoPlasma: the language is yet another design complexity. It may be frustrating, but it's another challenge.

lol, you think PBASIC is bad... you should see the languages they use for actual industrial control. They make the LEGO Mindstorms language look like assembly. *shudder*

if you have a better solution then make your own RC

~Pyro

bigqueue
19-01-2003, 09:35
Originally posted by maDGag
y is parallax stil using PBASIC

primitive lower life form ?

switch to C++, Java, VB

i h8 labels


maDGag,

What exactly is your point.....it's just another programming language.

Are you complaining about the limits of the language....well, have you ever thought about the resources available on a tiny pic micro-controller? What, do you expect virtual memory?

Is it too "difficult" to deal with?

Well, that's the way life is right? We have to get through it with what we have available to us......Engineering solutions to problems is no different.

The limits of the STAMP only require you to be more intelligent about your approach. It makes you think....which is a great big part of what FIRST is all about anyways....right?

-Quentin

Hendrix
19-01-2003, 09:44
Go download the newest version; then you don't have to worry about labels.

rwaliany
21-01-2003, 01:14
Well, Parallax did release a Java-based interpreter (the Javelin, or something like that), but it's not for the RC we have.

But yeah, "well, have you ever thought about the resources available on a tiny pic micro-controller?" The Robot Controller from last year has a PBASIC 133MHz processor which is, sadly enough, faster than one of my computers, which runs Linux (Red Hat 8) with X. The new RC for this year might have a processor that's even faster (PBASIC 2.5 vs 2.0).

rbayer
21-01-2003, 01:24
Originally posted by rwaliany

But yeah, "well, have you ever thought about the resources available on a tiny pic micro-controller?" The Robot Controller from last year has a PBASIC 133MHz processor which is, sadly enough, faster than one of my computers, which runs Linux (Red Hat 8) with X. The new RC for this year might have a processor that's even faster (PBASIC 2.5 vs 2.0).

Sorry, but last year's was 50MHz. Same this year.

rwaliany
21-01-2003, 02:00
I could have sworn I read 133MHz in the Control Systems Manual or something. Oh well, if it's 50MHz then it runs at the same speed as that Linux computer.

FotoPlasma
21-01-2003, 02:12
The Basic Stamp 2sx (the BS in the RC) is officially clocked at 50MHz.

There are two other processors in the RC (PICs, if I recall correctly), so there might be some confusion about total processing power, or whatever.

rwaliany
21-01-2003, 02:17
I can't seem to find the schematics. I had a sheet with all the information about the three processors in the RC and the basic packets/communication messages.

Dave Flowerday
21-01-2003, 08:17
Originally posted by rwaliany
The Robot Controller from last year has a PBASIC 133MHz processor which is, sadly enough, faster than one of my computers, which runs Linux (Red Hat 8) with X. The new RC for this year might have a processor that's even faster (PBASIC 2.5 vs 2.0).
OK, well as someone else already pointed out the Stamp does not run at 133MHz, but even if it did it wouldn't matter. You've fallen into the "megahertz myth" trap here. What that means is that you can't compare two different processors (Pentium versus BASIC Stamp) even if they run at the same speed. The reason for this is that each different type of processor does a completely different amount of work with each tick of the clock. Also remember that the BASIC Stamp (as well as all the other microcontrollers inside the OI/RC) is only an 8 bit processor.

This is exactly why it makes no sense when people compare a 2GHz Pentium to a 1GHz Macintosh and claim that the Mac is slower. It's apples and oranges.

Also the change from PBASIC 2.0 to 2.5 has nothing to do with the processor. You should be able to use new 2.5 code with an old robot controller, because the tokenizer converts it all down to the same machine instructions anyway.

BTW, it's been a while since I looked at it, but I believe the other 2 processors in the RC are PIC16c74s.

Rickertsen2
21-01-2003, 08:35
I'm fine w/PBASIC, but I do think it would be cool to at least give the Javelin Stamp a try.

Matt Leese
21-01-2003, 08:42
The last I heard from FIRST (this was from Eric about a year and a half ago) was that they were looking at using some other programming language besides PBasic. However, they wanted to make sure that there would always be the option of using PBasic. I don't know what's come of it since then.

Matt

Rickertsen2
21-01-2003, 08:47
I really do hope they pursue something else. It's a big change going from mainly C++, Java, and PHP to PBASIC.

rbayer
21-01-2003, 09:23
Originally posted by Dave Flowerday
Also remember that the BASIC Stamp (as well as all the other microcontrollers inside the OI/RC) is only an 8 bit processor.


Isn't the Stamp 16-bit, since it always chews on 16 bits of data at a time? If not, what determines the "bits" of a processor?

Dave Flowerday
21-01-2003, 09:52
Originally posted by rbayer
Isn't the Stamp 16-bit, since it always chews on 16-bits of data at a time? If not, what determines the "bits" of a processor?
The Stamp that we're using is based on a Scenix SX28AC microcontroller, which is an 8-bit unit. Typically, when someone refers to the "bits" of a processor, it refers to the maximum size of integer that the processor can operate on. What this means is that, at the assembly-language level, an 8-bit processor can only perform operations on 8-bit numbers. So the assembly-level "add" command on an 8-bit controller can only add two 8-bit numbers. However, you can still do 16-, 32-, or whatever-bit math by operating on the numbers in 8-bit quantities. So if an 8-bit processor wants to add two 16-bit numbers, it first adds the lower 8 bits of the two numbers, then adds the upper 8 bits of both numbers plus the carry bit from the previous operation. I hope I'm making sense here, as I can tell I'm not explaining it very well.

Anyway the bottom line is when the Stamp works with 16 bit values it is really being translated into a series of 8 bit operations inside the microcontroller running the stamp interpreter.

Jeff McCune
21-01-2003, 11:56
Originally posted by Rickertsen2
I really do hope they pursue something else. It's a big change going from mainly C++, Java, and PHP to PBASIC.

It is? Last I checked, PHP, C++ and Java all had IF / ELSE / GOTO control structures... Besides, programming is 90% about high level logic and 10% about syntax. A *good* programmer isn't limited by the language they have in front of them. They can think logically about the problem, come up with a solution, and then translate that solution into whatever syntax they have.

PBasic isn't bad. At least it's not raw assembly.

rwaliany
21-01-2003, 21:15
Originally posted by Dave Flowerday
The Stamp that we're using is based on a Scenix SX28AC microcontroller, which is an 8-bit unit. Typically, when someone refers to the "bits" of a processor, it refers to the maximum size of integer that the processor can operate on. What this means is that, at the assembly-language level, an 8-bit processor can only perform operations on 8-bit numbers. So the assembly-level "add" command on an 8-bit controller can only add two 8-bit numbers. However, you can still do 16-, 32-, or whatever-bit math by operating on the numbers in 8-bit quantities. So if an 8-bit processor wants to add two 16-bit numbers, it first adds the lower 8 bits of the two numbers, then adds the upper 8 bits of both numbers plus the carry bit from the previous operation. I hope I'm making sense here, as I can tell I'm not explaining it very well.

Anyway the bottom line is when the Stamp works with 16 bit values it is really being translated into a series of 8 bit operations inside the microcontroller running the stamp interpreter.

Ah, that makes sense.
What would the result of this be? Does it overflow? What's the actual result in binary?

01001010 10011101 +
11001010 10100101
^^
20010101 01000010
1 00010101 01000010
00010101 01000010
... any ideas? I haven't had time to look this up. The question always comes up when I'm away from my computer.

rbayer
21-01-2003, 21:21
Yes, it will overflow. However, as far as I know there is no program-accessible carry bit that will let you know when this happens. Instead, you will just get the last 16 bits back and that first 1 will be lost.

rwaliany
21-01-2003, 21:37
Hrmm, "you just get the last 16 bits back and that first 1 will be lost"... numbers are better; binary is read right to left. Don't you mean the "first 16" and "last 1"?

Sorry, my question was for C++ (or standard binary), which I did not state. I was mainly wondering about how minus signs are stored in binary.

rbayer
21-01-2003, 21:48
It all depends on what you define as first and last. I was assuming the normal left-to-right reading order. I probably should have said you get the 16 least-significant bits and lose the most significant.

Negatives in binary: two's complement. Basically, invert everything, add 1. For example, to find -1, take 00000001, invert the bits (11111110) and add 1 (11111111). This can either be interpreted as 255 or -1.

rwaliany
21-01-2003, 22:10
" (11111110) and add 1 (11111111). This can either be interpreted as 255 or -1." Only in PBASIC

Rbayer, notice "Sorry, my question was for C++ (or binary standard)," "how signs are stored in binary."

Hence, a 16 bit integer = 32767 through -32768.

or an unsigned short int which maxes 65535 (16 bits)

which leads me to believe that there is an actual extra bit for signs when compiling in c++.

Never mind; trying to explain my question, I figured it out.

00000000 00000000
^ the 32768 bit (the 16th bit, 2^(16-1)), or the last bit, is used for signs.

11111111 11111111 = 65535

65535 + 1 = 00000000 00000000

in signed ints,

11111111 11111111 = -32768
01111111 11111111 = 32767

All of them off counts as the number 0, I think the last one signifies negative and has the value of 1.

ex: 10000000 00000000 = -1, considering 0 is never negative. That's why you get the -32768 instead of -32767

I think this makes sense; correct me if I'm wrong.


v = (2^n) - 1 -> max unsigned n-bit integer
range: 0 to v

v = (2^(n-1)) - 1 -> max signed n-bit integer

range: -(v+1) to v

rbayer
21-01-2003, 22:24
Originally posted by rwaliany
" (11111110) and add 1 (11111111). This can either be interpreted as 255 or -1." Only in PBASIC

Rbayer, notice "Sorry, my question was for C++ (or binary standard)," "how signs are stored in binary."


This is the same for either C or PBASIC. In order to differentiate between 255 and -1, you have to tell the compiler whether you are using a char or an unsigned char. If it's a char, it will interpret it as -1. If it's an unsigned char, it will be 255. Try this:

#include <stdio.h>

int main() {
    int myNum = -1;

    printf("Signed: %d\nUnsigned: %u\n", myNum, myNum);

    return 0;
}

You'll see that when interpreted as signed, it prints -1, as expected. When interpreted as unsigned, it will print 4294967295, which is the largest possible unsigned int (32 1's).

Using a similar program, you can find that -32768 is actually represented as 4294934528, which is 11111111111111111000000000000000.

32768=1000000000000000.

Invert: 0111111111111111.

Add 1: 1000000000000000.

Sign extend to 32-bits: 11111111111111111000000000000000, as expected.

rwaliany
21-01-2003, 23:46
"Using a similar program, you can find that -32768 is actually represented as 4294934528, which is 11111111111111111000000000000000."

Note: I said short int; short ints are 16 bits. I do it in short ints because that's what I'm used to, and I don't feel like writing 32 zeros.

From my experience when I was 13 or so, working on my C server, I used short ints.

Below is my experience with them:

32767
Before: 0111111111111111

32767 health + 1 health
After: 1000000000000000
-32768 poor guy

Ex: 1111111111111111
-1

-1 + 1
Result: 00000000 00000000
0

Oh, lol, I didn't realize I put
"11111111 11111111 = -32768"
I'm on 6 hours of sleep over 3 days, pardon me. Didn't sleep Saturday night.

I don't see the point of the invert-and-add-1 you were doing.

rbayer
22-01-2003, 01:27
I'm not going to argue over this one: x86 (and most other architectures) use two's complement for representing negative numbers. Under that system, you invert all the bits and add 1. It's just the way it is. Don't believe me? Go ahead and read this (http://webster.cs.ucr.edu/Page_asm/ArtofAssembly/ch01/CH01-2.html#HEADING2-96).

Gobiner
22-01-2003, 03:49
Crap, and here I was thinking all along that the bit count of a processor was the number of bits it could use to address memory. 0x123456 vs 0x123456789abc. I guess I never did the math, or never realized that 268435455 memory addresses was probably enough for whatever you'd use an Itanium for.

redbeard0531
22-01-2003, 08:00
Cheap formula for two's complement binary: 256 + x, where 256 is 2 raised to the number of bits (8 here), and x is the NEGATIVE number to be converted. I used this in a comp sci class once and it worked great.

Anthony Kesich
26-01-2003, 23:56
See, I learned basic programming and logic being bored in the back of my math classes (geometry, alg II, and pre-calc), creating games and solvers. My friends and classmates all loved me for it. Anyway, it set me up for real programming: since the TI-83 Plus language was very limited, I had to come up with everything myself, except for basic if/then statements, goto, inputs, and displays. So yeah, programming is 10% knowing the language, 40% logic, and 50% luck and ingenuity.

-Anthony

Adam Krajewski
27-01-2003, 04:29
I am a former PBASIC hater, now reformed.
While it has its limitations, with clever programming you can do just about anything. GOTOs and all, it's still nicer than VisualBASIC. ;)

Adam

Ameya
27-01-2003, 13:00
Originally posted by rbayer
I'm not going to argue over this one: x86 (and most other architectures) use two's complement for representing negative numbers. Under that system, you invert all the bits and add 1. It's just the way it is. Don't believe me? Go ahead and read this (http://webster.cs.ucr.edu/Page_asm/ArtofAssembly/ch01/CH01-2.html#HEADING2-96).

No need to argue, since you're both right. Rwaliany is using the actual definition of two's complement (the highest bit represents the negative of what it does in an unsigned int, so 10000000 = -128 if it's signed and +128 if it's unsigned). Rbayer is using an easy shortcut for calculating the two's complement.

Gobiner
29-01-2003, 03:16
A skillful programmer isn't limited by the law, either. Wait, what I meant to say was that a skillful programmer is limited by the language, at least when we're talking PBASIC. I'd like to see someone make their robot do differentiation. If you're talking QBASIC versus C, then it doesn't matter. PBASIC simply doesn't do a lot of things. The difference is in the context. PBASIC could do anything a person could ever want their robot to do, if they're smart enough and can devote the time to doing it. If it couldn't do everything feasible/useful, I think FIRST would change things.