orkid: oh that's what family day is!!! a friend of mine was like "want to go skiing on family day?".. o_O was the response from me
orkid: the things you learn here
GeeZ: i have a 780V chipset? should i use the onboard video or a pcie radeon x550 video card?
kcodyjr: GeeZ, the 780V is on the board?
kcodyjr: at this time, the support is more solid for the x550; at the end of the day, it won't make a -whole- lot of difference
kcodyjr: if you're going to run unaccelerated, i bet the 780V will be a little faster
kcodyjr: let me rephrase that; an onboard chip that's designed to run out of shared system ram will be a little less slow when running unaccelerated X
kcodyjr: but there's no doubt that the r4xx and r5xx have much more complete acceleration drivers
kcodyjr: so i guess the question comes down to, do you want a "just work (as well as it's going to)" system, or do you intend to tinker with drivers and such
kcodyjr: hmm. that reminds me. if i'm ever going to play with dual cards, i'll need to find a radeon-something that fits into a pcie-x4 slot...
airlied: kcodyjr: aren't the slots the same?
kcodyjr: airlied, physical slot length differs
kcodyjr: a 1x card will run in a 16x slot but not vice versa
kcodyjr: my board has an x16 with the rv610, an x1 with a xonar d2x, and an empty x4 waiting for i don't know WTF they put it there for
airlied: no pci slots?
kcodyjr: 3 of them, all empty at the moment ;) i was thinking about loading them up with tuner cards...
airlied: you can get pci radeons still
kcodyjr: there's a thought
kcodyjr: but something tells me that, regardless of how perfect the scheduling algorithm is, i will -never- boost the performance of a pcie x16 card by offloading work to a pci card... ;)
kcodyjr: suppose it would be useful for isolation and bus behavior correctness testing though
kcodyjr: since you're around, i wanted to ask you... how actively you're working on that kms EDID stuff - read, whether i ought to try taking a whack at making a patch for you
airlied: I won't be looking at it for at least a few weeks
kcodyjr: then i'll see if i can come up with anything. two issues i want to address:
airlied: so if you can have a go at the secondary block reading it would be cool.
kcodyjr: intermittent failure reading the first block, and no code for reading extended blocks
airlied: the parsing of the secondary blocks is only getting into shape in the X server now
airlied: I expect adding new properties for secondary blocks perhaps
airlied: or maybe just extending the 128-byte EDID property
kcodyjr: i was thinking that the blob would just become variably sized
kcodyjr: since it would have to be implemented as two read operations
airlied: I suppose parsing the first 128 bytes will allow you to know what should follow
kcodyjr: one 128 byte read to verify EDID version and find the block count, and a second read of calculated length
kcodyjr: i've already got most of edid1.3 being parsed in my standalone library, i've got quite a good grip on the binary format by now
kcodyjr: although it's supposedly extremely rare in the field, it is possible for the zero block to be EDID2.0 at 256 bytes for the first block
airlied: all the CEA block parsing etc is in the patches from Ma Ling on the xorg list
kcodyjr: normally it's done as an EDID1.3 extension, but native 2.0 is possible
kcodyjr: what's CEA mean?
kcodyjr: i think i saw that acronym but i forget at the moment
airlied: it's the HDMI block, Consumer Electronics Association
kcodyjr: oh that's right, CEA-#### occurs a lot in the vesa docs
airlied: have you access to all the vesa docs?
kcodyjr: not -all- of them, but enough to have gotten a good long way into it
airlied: it might be worth joining X.org, I think you can get them then
kcodyjr: just the publicly available stuff
kcodyjr: which does include the GTF and CVT spreadsheets
kcodyjr: one thing i'm missing is the DMT specs though
kcodyjr: and i'm not sure what you mean by joining X.org
kcodyjr: i have put in a bug for a fd.o account
airlied: X.org has an official membership process
airlied: for contributors.
kcodyjr: ahh. well, let me work up an actual contribution first ;)
kcodyjr: one promising thing on that subject actually... once i've integrated the edid parser into my gpu library project, it should have enough information to start an X session against any kms device
airlied: btw the transient ddc failure would be nice to find.
airlied: I haven't seen it very often here.
kcodyjr: yes. that's kind of a priority for me, since it bites me regularly, though not -quite- f'kin predictably
airlied: not sure if its just incorrect timings or not
airlied: jbarnes on xorg-devel might know more also
kcodyjr: i think that's possible, but i also suspect an atombios issue
kcodyjr: when it fails to set a mode, i see it configuring DAC-8, when it succeeds, it uses TMDS-9
kcodyjr: i was going to try a retry loop and see if that changes anything; if it's timing, it should, if it's a parse failure or something, retrying 1000 times won't change a thing
kcodyjr: oddly: on the failure case, it only seems to try to read DDC once per connector, it's not like it's searching through both encoders or something
kcodyjr: cable is dvi-to-hdmi, so there's no chance of an analog signal getting displayed
kcodyjr: i've got a hunch that if i switched to a straight-through DVI-I to DVI-I, it would work, but randomly fall back to analog 800x600 as opposed to the black screen
kcodyjr: argument in support of a timing issue: this is the gddr3 variant, clocked higher at the factory. and the timing values are literal constants.
kcodyjr: hmm. how hard is it to move a userspace utility library into kernelspace?
kcodyjr: the downer to the edid parsing code that's already there, is that it assumes edid1.3 by virtue of code structure. given any other version, it can fail miserably, and i'd have to do major rework to deal with detailed and standard timings in extension blocks
airlied: kcodyjr: yeah the current edid parser needs more work alright.
airlied: porting userspace to kernel depends on what library bits it needs and coding standards.
kcodyjr: i've been using C99 integer and boolean types in the interface; other than that, just fprintf, malloc, string.h stuff
kcodyjr: actually, is it C99? uint8_t, uint16_t, uint32_t, bool, and double. couple enums, couple structs.
kcodyjr: that's the interface of course. the implementation has a whole lot more structs. ;)
kcodyjr: although the edid standard demands certain data be in certain slots as of v1.3, the actual binary format does not. i've written to the format, so it should deal quite gracefully with older blocks
kcodyjr: grreeat. the sysfs edid attribute size is hardcoded.
kcodyjr: but i was able to make the property size variable, and it builds...
kcodyjr: whoohoo! it booted and set the mode... now let's see if it got a full block...
kcodyjr: and the retry loop is indeed preventing failures
kcodyjr: WHOOHOO! I have the full data block! even get-edid via fglrx couldn't achieve that :)
kcodyjr: and the patch is all of 7K. mind you, it just -gets- the full blob, and makes it available through the property. the kernel isn't doing anything with the additional data yet.
MostAwesomeDude: Good work.
kcodyjr: thanks :)
kcodyjr: let me pastebin the patch, you can tell me if i need to keep going before posting it to a bug
MostAwesomeDude: So, why is this again?
kcodyjr: although the 1st EDID block contains the most crucial data (unless they seriously broke standard), the additional blocks have the really juicy stuff
kcodyjr: so, goal 1 was to get the full EDID blob, not just the first 128 bytes
kcodyjr: goal 2 was to deal with the intermittent I2C failures, which i did simply by raising the retry limit from 1 to 10
kcodyjr: and it seems i left the additional pci_id in the patch. doh. ;)
kcodyjr: for the kernel to actually make use of the additional blocks, especially if they contain more modes, i'll have to pretty much tear out the whole parser
kcodyjr: brb, coffee run...
kcodyjr: so, MostAwesomeDude, i'm thinking i should keep working before submitting. that patch is only good for getting the block, and the kernel will want full parsing
kcodyjr: actually, let airlied decide if he's around, it's a patch against his tree
MostAwesomeDude: kcodyjr: It's up to you, but letting the ML see it might be helpful.
kcodyjr: i can just post to it if i'm subscribed?
MostAwesomeDude: I think so, yes.
kcodyjr: will i need to sign or anything?
kcodyjr: well, sent, i'll have to see if it bounces ;)
kcodyjr: oh, yeah, this is a test run of my parser: http://pastebin.ca/1317820
kcodyjr: the parser has completed by the time "EDID DUMP: parsed info" appears, all that output is just printf's spitting out struct fields
ttedi: I am running Ubuntu 8.10 but I have no 3D or video acceleration on my Radeon 2100 IGP (RS740). /dev/dri is empty although the radeon kernel module is loaded. Which versions of kernel, libdrm, mesa and xf86-video-ati are necessary for 3D on RS740?
airlied: kcodyjr: I can't see those pastebins for some reason
airlied: ttedi: is it an Intel or AMD CPU?
kcodyjr: hrm, either one? that's odd
airlied: kcodyjr: can't connect to pastebin.ca
airlied: ttedi: it should be in 2.6.28
kcodyjr: airlied, figures. other ideas?
airlied: kcodyjr: pastebin.com? :)
ttedi: so drm in the kernel is too old? ok, I will try 2.6.28. http://rafb.net/p/Vo5ljT18.html is my Xorg.0.log in case it is interesting
airlied: ttedi: yup if the kernel is 2.6.27 then its too old
kcodyjr: it won't be valid since i can't upload as a file, but the patch: http://pastebin.com/m193f808e
kcodyjr: and the test run output: http://pastebin.com/m3f5273fe
airlied: so redoing EDID always works? i.e. you dump the failure and retry works?
kcodyjr: it has not yet failed for me, but i've only been testing it for a few hours... and my fiancée made me reboot into an fglrx kernel so she could watch tv
kcodyjr: i can post up a dmesg dump if you like
kcodyjr: no wait, i rebooted.
airlied: we used to do it 3 times as per the kernel fb edid code
airlied: but X usually always gets it straight away.
airlied: so I suspect some subtle timing issues.
airlied: which is strange you'd expect better timing in the kernel :)
kcodyjr: i can try adjusting to 3, but i did see it fail 6 times when polling after boot
kcodyjr: well, the i2c delay parameters are static; i wonder how it works as well as it does
airlied: I should buy a scope
kcodyjr: isn't i2c timing a function of the current mode?
airlied: i2c is completely separate set of lines
kcodyjr: what is it sensitive to, then
airlied: thats what I'd like a scope for :)
kcodyjr: well, meantime, retrying seems to get by
kcodyjr: next thing i need to do is get the multiblock handling correct, and start implementing specific extensions
kcodyjr: i think i found the CEA one you were talking about; the 0x02 0x03 block with the HDMI data
ttedi: airlied: I upgraded to 2.6.28 and video acceleration and desktop effects seem to work now. (glxgears is still at 380 fps but I don't know whether this is significant). thanks!
Enverex: 380 sounds like software rendering
ttedi: but Compiz works, including wobbly windows
airlied: Enverex: it's an IGP, I wouldn't expect it to fly.
Enverex: airlied, Still, I'd expect it to get higher than software rendering speeds. Pretty sure I get ~900 fps with software rendering on this machine...
ttedi: I got around 350 fps with 2.6.27
airlied: ttedi: does glxinfo print Yes and not mention swrast?
airlied: it might be indirect rendering works, so compiz works
ttedi: my glxinfo says "direct rendering: Yes" but it did so with 2.6.27 too for some reason. http://rafb.net/p/o8zE1Z22.html
MrCooper: glxinfo|grep render
MrCooper: that shows you both direct rendering and the renderer string
ttedi: direct rendering: Yes OpenGL renderer string: Mesa DRI R300 20060815 NO-TCL
MrCooper: looks good
MrCooper: I guess current CPUs may actually render glxgears faster than lower end GPUs...
Enverex: Quite possibly, and that is quite amusing.
MrCooper: hence the mantra 'glxgears is not a benchmark', nor even a good test for working hardware acceleration
Enverex: That or IGPs really are completely useless and are just there for the port
MrCooper: texturing, pixel shaders etc.
ttedi: I would have expected glxgears to be an order of magnitude faster than software rendering. Even on an old Radeon 9200 I had more fps
MrCooper: that had dedicated VRAM
ttedi: But Compiz works with acceptable performance so I will not complain :)
Enverex: MrCooper, Thought most IGPs still shared RAM?
MrCooper: that was the point? :)
MrCooper: the 9200 probably had more memory bandwidth
ttedi: Google says that an Intel GMA X3100 gets around 1000fps and it probably has similar bandwidth as an RS740
MrCooper: whatever, glxgears just isn't worth worrying about
airlied: gears: I understand its not a benchmark, but why does my friends machine go faster :-P
Enverex: I used to get 20,000 on my old GeForce 7900GTO
Enverex: I get 2200 with my X1700 Mobility with the radeon driver and I get 11,000 with my HD4850 on the fglrx drivers
MrCooper: the only useful glxgears comparison is between different drivers on the same hardware
airlied: it really measures how fast the GPU can clear the buffers :-)
MostAwesomeDude: Enverex: fglrx is several dozen times bigger than radeon.
MostAwesomeDude: They could totally have a thing "make glxgears go real fast to fool everybody" buried in there. :3
airlied: we should totally write a driver that just makes gears go really fast
MostAwesomeDude: airlied: glxgears.gif
nanonyme: MostAwesomeDude: Loads of code no one knows anymore why it's there but no one wants to remove either because it might break something? :p
Enverex: MostAwesomeDude, Well 2200 was expected as that laptop's card is much slower than the 4850 and the radeon driver isn't that great 3D-wise yet. It was more the nvidia/fglrx comparison that concerned me as the 4850 is 3 gens newer
MostAwesomeDude: nanonyme: I bet that most of that is memory management and fast state management. Moar code means moar speed.
MostAwesomeDude: Enverex: Apples and oranges.
nanonyme: You mean the exact opposite? :~) (Not that a lot of the code couldn't be related to memory management, after all, it probably includes some equivalent of GEM)
Enverex: MostAwesomeDude, True, I just wish Wine could sort out the shader issues with ATi cards too
nanonyme: Wonder if everyone including nVidia will decide to go opensource when Gallium3D and GEM are ready. ;)
ttedi: nanonyme: look at VIA, they open sourced only due to OEM pressure
Enverex: Are we to expect leaps and bounds in the radeon/hd drivers this year?
osiris_: how are the VAP output vectors mapped to RS input packets?
osiris_: I thought that I'd figured it out, but then there's this field: RS_COUNT.IC_COUNT is 4 bits wide, but why is it 4 bits if the docs say that the max value is 4 (3 bits would be enough)?
osiris_: MostAwesomeDude,airlied: any idea?
glisse: osiris_: sometimes there is more bit than necessary
glisse: likely at one point people designing this part thought that they would allow more than 4 colors
osiris_: what about RS_COUNT.W_ADDR? where does it come from?
nanonyme: Hrm, if they need pressure, they must not be understanding the big picture properly...
nanonyme: Having full compatibility for your hardware device out of the box is a pretty awesome thing.
glisse: osiris_: docs are missing W_COUNT
glisse: it's bit 11 of RS_COUNT
glisse: and W_ADDR simply tells which one should be used for W
osiris_: glisse: which one what?
glisse: which ip instruction generates the W value
glisse: if you don't want to use w value coming from the vertex
osiris_: glisse: doc says that W_ADDR is relative input packet location
glisse: input packet are ip
osiris_: glisse: TEX_PTR and COL_PTR are also relative input packet locations, and these are in RS_IP_[0-7]
glisse: seems r5xx & r3xx are different with this w things
glisse: osiris_: so you got a max 32 values as input
glisse: and w_addr tells which one is w
osiris_: glisse: but how would I know what's in the input and which order?
glisse: osiris_: it's in the vertex shader
osiris_: glisse: hmm, so if vertex shader outputs pos, color1, tex2 and tex3 how would the RS input packet look like?
glisse: pos is already handled
glisse: you only need to take care of color1,tex2
glisse: and the vertex shader selects which of the 32 outputs it writes things to
glisse: so if color1 is written to 0,1,2,3 then you use them in the rs for color
glisse: note that you better always pack things in order to minimize the number of outputs you use (among the 32)
glisse: as i believe the number of rs thread is a function of # of output used
osiris_: glisse: if it were as you're saying, then on non-tcl hw where I have to put the vertex attributes in fixed locations, I would have to set COL_PTR(3), TEX_PTR(8) and TEX_PTR(9), and that's certainly not the case in the current code
glisse: osiris_: as i said you pack things
glisse: col3 become col0
glisse: tex8 become tex0
glisse: tex9 become tex1
osiris_: glisse: still I can't see how Wpos falls into this
glisse: wpos is added only if you need it in the frag shader
glisse: otherwise just don't care about it
glisse: if you need it then you need to ask to the raster to generate it
glisse: and tell from where it comes from in the input values of the raster
osiris_: glisse: but how do I set where on the pixel stack the Wpos is placed?
glisse: for r3xx i think you have to use one of the color outputs
glisse: for r5xx it seems its bound to a color output too
osiris_: glisse: if col3 becomes col0, ... then the RS_IP would have COL_PTR(0), TEX_PTR(0), TEX_PTR(4)? (assuming that tex coords have 4 components)
glisse: well RS_IP_0 would have everything set
osiris_: glisse: TEX_PTR(S,T,R,Q)?
glisse: field of rs_ip_0
osiris_: glisse: yeah I know, but don't know what TEX_PTR(S,T,R,Q) would mean. TEX_PTR holds tex coord input packet location (0-63)
glisse: osiris_: again there is more bit than needed
glisse: TEX_PTR select one of the 32 possible float input
osiris_: glisse: I don't know what you mean by TEX_PTR(S,T,R,Q)
glisse: RS get a maximum of 32 different float as input
glisse: S=first component of texture coordinate ...
glisse: S=X, T=Y, R=Z, Q=W
glisse: so a 4 component texture coordinate use s,t,r,q
glisse: a 2 component, s,t
glisse: q is often used for perspective correction
osiris_: glisse: yeah, but TEX_PTR holds ints 0-63, so then what do you mean by writing TEX_PTR(S,T,R,Q)?
glisse: i think the opengl spec describe typical use of s,t,r,q
glisse: osiris_: let say you got 1 texture
glisse: and need one coordinate
glisse: well 2 coordinates
glisse: the vertex shader or swtcl writes the vertex position (first 4 floats), then 2 floats corresponding to the texture coordinates
glisse: (assuming there is no color)
glisse: so you got :
nanonyme: This is probably an obvious question but does TEX_PTR mean texture pointer or what?
glisse: input=x, i=y, i=z, i=w, i=xtexcord, i=ytexcord
glisse: then you set rs_ip_0, tex_ptr_s to 5
glisse: tex_ptr_t to 6
glisse: tell that you got 2 tex component in rs_count
glisse: and 0 color in rs_count
osiris_: glisse: there's no such field as tex_ptr_s, tex_ptr_t, ..., only one tex_ptr field
glisse: yes in rs_ip_0 reg
glisse: there is
glisse: then in rs_inst_0 you set tex_id=0, tex_cn=1, tex_addr=1 (write to frag shader register 1)
osiris_: glisse: ok, now I know where's the confusion coming from. you're talking about r500 hw, and I'm talking about r300
glisse: tex_id=0 mean you take input from rs_ip_0
glisse: it's the same on r3xx
osiris_: in r300 there's only one common field tex_ptr
glisse: because you can get component from separate input
glisse: so tex_ptr=5 in my example
glisse: then in r3xx you set sel_s,sel_t,sel_r,sel_q
glisse: r3xx is less flexible than r5xx
osiris_: glisse: ok, I get that part now. I just didn't know what TEX_PTR(s,t,r,q) would mean on r300 hw
glisse: as texture components need to be packed, ie the 4 components of the same texture coordinate need to be adjacent, i+0, i+1, i+2, i+3
glisse: got to go bbl
glisse: anyway for w afaik it has to be interpolated using a color output
osiris_: glisse: so in your example if I wanted to use W attrib, I would have to set W_ADDR to 3?
glisse: if you want w in frag shader
osiris_: glisse: ok, thanks :)
osiris_: glisse: there's one problem. If I wanted to write 4 colors I wouldn't be able to address the 3rd and 4th color because COL_PTR is only 3 bits wide.
agd5f: osiris_: think of the RS as mapping an input stream to the pixel shader. you associate offsets in the input stream with colors and tex coords
lucky711x: hello all, I'm having a bit of trouble with a new video card i bought. it's a radeon hd 2600 pro 8x AGP with 512MB RAM. when I install it and plug the 6-prong power adapter into it, I cannot turn on my computer. when I unplug the power adapter the computer will come on. what's the deal here? my PSU is a Vantec VAN-460N
osiris_: agd5f: yes, that's how I think of it, but I have a few problems understanding a few regs. e.g. TEX_PTR is a pointer to the first tex coord component in the input stream. if COL_PTR is the same for colors, I wouldn't be able to put all four colors onto the pixel stack because color2 would start at offset 8 and COL_PTR is only 3 bits wide
osiris_: agd5f: or maybe RS assumes that color always has 4 components and COL_PTR is actually offset/4
scsiraider: is anyone still working on radeon KMS or is that all halted for r600+ support?
adamk: Last I heard, airlied was still working on KMS.
scsiraider: yeah thats what i heard like in October/November
scsiraider: but idk if there is anything new since then
GNUtoo-desktop: hello, I have a radeon x700 with the free software driver... I was making a backup of my data (my LUKS partition table was gone)... when it froze (i went back to the computer, moved the mouse onto another gnome-terminal tab and it froze!!!)... the keyboard and the wifi network both seem gone... is there a way to access it, or to get back control of the computer?
GNUtoo-desktop: mmm...plugging a external keyboard doesn't work
GNUtoo-desktop: I used no 3d-acceleration nor switched to a real-console...
GNUtoo-desktop: what I need to know is if the computer is still copying the data or not
adamk: If it's that locked up, it seems really unlikely that the machine is alive at all.
GNUtoo-desktop: ok thanks a lot
GNUtoo-desktop: because I already have problems with the radeon driver...but not like this one...
adamk: Well, while it could certainly be the radeon driver, it could also be any driver in your system that caused this lock up.
GNUtoo-desktop: adamk, ok thanks
adamk: I've never seen a complete system lockup when only doing 2D operations. You could try replicating the problem after disabling the dri extension in your xorg.conf file.
bridgman: scsiraider; KMS needs memory management, which in turn needs changes in the rest of the driver stack (radeon & drm in this case); airlied has kms and mm running, changes made to radeon, he's now working on changes to mesa
bridgman: since mesa has different hw drivers for each generation of radeon hw, he's now merging r100/r200/r300-500 hw drivers into a single driver so that mm support only needs to be added once rather than 2 or 3 times
bridgman: all needed for kms unfortunately ;)
bridgman: that's my dim understanding anyways
scsiraider: thanks for the update
chithead: GNUtoo-desktop: does it react to magic sysrq? or acpi events such as lid/power button?
GNUtoo-desktop: chithead, i've rebooted and I couldn't use the keyboard even USB
bridgman: oops, should have said "radeon and mesa in this case"; kms and mm already change drm
revx: has anyone implemented fb console on top of KMS?
adamk: revx: That's pretty much how it works :-)
adamk: To my knowledge, that's pretty much how it has always worked.
revx: adamk: I mean a more generic KMS fbcon driver
revx: (IE: driver-independent)
revx: hw driver*
revx: it could be that way, I haven't looked at KMS since I lack intel hardware :P
bridgman: revx; not sure if it has been done yet but the intention is to have all of the existing kernel graphicky things run over KMS if it is present. Without that KMS is just another driver...
nanonyme: bridgman: Do you think the closed drivers will be leaning towards KMS too?
bridgman: nanonyme; not sure yet; the driver uses internal APIs that are different from the open stack and part of the "joy of KMS" is that the other kernel graphics drivers get changed to use the open KMS API as well, so only one driver is controlling the hardware
bridgman: unless we change everything to use all of the open APIs we would either have to have modified versions of all the other kernel graphics bits or would lose one of the main benefits of kms
bridgman: for the primary target of the fglrx driver (commercial workstation on a small number of enterprise OSes) it's probably going to be 1-2 years before KMS is a factor
revx: speaking of....
bridgman: of ?
revx: bridgman: I spent years doing CAD work but I don't know anymore what Linux cad software is out there!
revx: I know pro-e, I don't know about solidworks etc
bridgman: I'm not real current on that myself, but my understanding is that a lot of proprietary systems are implemented on Linux (actually were implemented on Unix and ported across)
bridgman: the biggest companies tend to design their own CAD systems and heavily integrate them with downstream production systems
bridgman: it's an interesting market; totally unlike consumer Linux ;)
revx: hah, I remember my huge project with a lawn mower engine
revx: when it came time to make some final renderings the x86 machines I was using didn't have the virtual memory space to render the project!
revx: so I started bringing my laptop in (K8 with crappy laptop harddrive and XPRESS200M graphics)
revx: CAD work on that with a large assembly was painful but necessary...
revx: bridgman: I liked the results: http://tehfoo.homelinux.org:40000/~foo/Mites.jpg (~800K)
revx: I didn't print that out but I have that around since the 49MiB bmp is overkill :P
revx: bridgman: strangely enough the largest individual part file in that is the gas cap!
revx: it's larger than even the cast iron block parts!
bridgman: I'm on 21Kbps dialup; am I going to regret clicking on that link ?
z3ro: bridgman: probably
revx: bridgman: 800K/20K/s worth of regret :P
revx: bridgman: 800*8K/20K/s worth of regret :P *
bridgman: back in 5 minutes ;)
z3ro: revx: btw nice modeling. I wish I was able to do stuff like that, but I think I'll stick to the code side of things.
z3ro: at the moment I borrow a lot of media from a few games for engine testing. :)
z3ro: and I guess CAD models are way more detailed and harder to do.
revx: z3ro: I never got to the second part of that: we were going to see how big of a turbo we could put on it before it blew up
z3ro: hahaha :)
bridgman: still downloading...
revx: z3ro: in the upper right panel you can see a white flap next to the fly wheel.. that really bothers me
revx: it's the governor for the thing -- the fins on the flywheel direct air at it
revx: I question how consistent it would be with a lot of air moving around it.. (like had we gone forward with the 10hp RC car idea ;P)
spstarr: hullo bridgman
bridgman: hi spstarr; sorry, off doing laundry, guess I should change my nick to bridgman_laundry or something
loswillios: with an | please
bridgman: yeah, I couldn't remember the preferred separator, sorry about that ;)
bridgman: 1 bridgman|laundry
bridgman: 2 bridgman|laundry
nanonyme: Heh, toying with the idea of seeing a Microsoft representative say that about laundry and what kind of comments would follow. ^^
bridgman: ... 100 bridgman|laundry
bridgman: you mean after the "|" comment ?
nanonyme: Never mind, bad joke.
spstarr: bridgman :)
bridgman: nanonyme; we need a good Fake Steve Ballmer around here
revx: bridgman: ballmer is a good motivator(and/or slavedriver)!
loswillios: developers! developers! developers!
loswillios: cracks the whip
spstarr: breaks whip
stoned: ATI Radeon Mobility M6 < supported by radeon?
stoned: where is the list of supported cards? I can't seem to google it up
mattst88: stoned, yes, the M6 is supported (and quite old)
stoned: with 3d support yes?
mattst88: stoned, yes.
mattst88: supported chipsets are here: http://dri.freedesktop.org/wiki/ATIRadeon
stoned: thank you very much
stoned: I've bookmarked it, and it helped me help a user in #debian get his Xorg to work. Poor guy was updating and fglrx broke
mattst88: stoned, wait, are you asking what cards fglrx supports?
Enverex: fglrx supports M6?
stoned: mattst88, why would I ask that
stoned: I got him to use radeon, but now his dri is not working
stoned: so I've asked him to join here as well because I'm running out of ideas
stoned: it says to turn on verbose in libgl_debug
mattst88: you just mentioned fglrx, and we get people in here a lot who think this is the channel for fglrx.
stoned: i exported it and its still not taking it
stoned: glxinfo still says no dri; we both set the verbose flag but it won't take it
stoned: he has the dri/glx libgl1-mesa libs installed as well
stoned: and incidentally, I never knew, but I also don't have dri on the radeon driver and I'm using a very old one too
stoned: 01:00.0 VGA compatible controller: ATI Technologies Inc Radeon Mobility M7 LW [Radeon Mobility 7500]
stoned: so we both have the same problem but I just found out about mine
ttick: I'm so glad I could help
stoned: ttick, I've told them the problem
ttick: ah, okay. So it's a 'sit back and wait' thing now.
mattst88: does /var/log/Xorg.0.log show any errors?
ttick: (EE) RADEON(0): Static buffer allocation failed. Disabling DRI.
stoned: pastebin the whole thing
ttick: looks like a reasonable suspect.
ttick: okay, just a sec
mattst88: both of you have the exact same card and problem?
ttick: different card, I think. I have teh M6, not M7
ttick: "(--) RADEON(0): Chipset: "ATI Radeon Mobility M6 LY (AGP)" (ChipID = 0x4c59)"
mattst88: things like offset of 0x0 (Will use front buffer at offset 0x0) look suspicious
mattst88: agd5f, see anything obvious in that log?
mattst88: ohh, duh
mattst88: "(EE) RADEON(0): At least 12288 kB of video memory needed at this resolution and depth."
mattst88: you're asking for too high a resolution/depth for the card
mattst88: it physically doesn't have enough memory.
mattst88: you're also using a very old radeon driver, I believe.
ttick: okay, where might I change either of those things?
mattst88: resolution and depth are in /etc/X11/xorg.conf, but I think you should probably try to update the radeon driver first.
ttick: fyi: I used to hand-dittle my own X11 configs back in the day -- but i'm very glad I never popped a CRT and don't have to hand-dittle them any more.
stoned: ttick, do not use them
stoned: ttick, back it up, and have a blank xorg.conf and use dpkg-reconfigure xserver-xorg
mattst88: surely debian provides a radeon driver newer than 4.3.0?
stoned: xorg is pretty good at doing configuration itself, however, you may want to specify the radeon device section
stoned: mattst88, he is using testing, so quite possibly yes
ttick: okay. xorg.conf is pretty light on info compared to what I remember.
ttick: hmm, dpkg didn't ask me anything about drivers -- just keyboard (and framebuffer -- which I said 'no' to)
stoned: yeh it does automatic stuff now
stoned: it should use radeon, if it doesn't you can specify that yourself
ttick: there was no difference between the xorg.conf that was just generated and my old one.
stoned: I thought you said you had a freaky custom one
ttick: sorry, that was "back in the day"
stoned: I thought it was from back in the day
stoned: like you kept it
ttick: oh, lol. jeez, no!
ttick: okay, so is there a way I can influence xorg not to blow out my memory with a high resolution?
ttick: (and of course, I really like my current resolution -- which is max for the laptop screen. Though it would be odd if the card didn't support the same)
mattst88: version 4.3.0 is from the XFree86 days, I think you should try updating it before changing xorg.conf
ttick: hmm, okay. stoned: any idea how I can do that w/o completely screwing myself with experimental-incompatibilities?
jcristau: mattst88: no, 4.3.0 isn't from the xfree86 days
jcristau: mattst88: it was that way until 126.96.36.199
mattst88: ahh, never mind then
ttick: hmm, should I try going for the fglrx drivers, then?
ttick: I think I just got back from that swamp
ttick: hmm, perhaps I'll just cool my heels until stoned fixes his box and lets me cheat off his solution notes.
Enverex: ttick, fglrx dropped support for that card while people still drove VWs covered in flower paintings
ttick: lol, thanks Enverex. Makes me feel good.
Enverex: Ah, that's the Mobility 9000, thought it was older than that. But yeah, ATi dropped support for anything prior to the 9500 going back a few years
DanaG: hmm, any idea how to get an R600 GPU to go into low-speed mode?
DanaG: I don't need dynamic clocks; I just don't want the thing wasting all my battery life.
DanaG: I prefer radeon over radeonhd, because the latter doesn't export DPI correctly, despite having correct size and resolution shown in the log.
ttick: okay, well, thanks for the help radeon guys.
DanaG: Hmm, DynamicClocks claims to work... but it still doesn't cool down very much.
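[editor's note] DynamicClocks is a boolean option of the radeon driver, set in the Device section of xorg.conf; a minimal fragment (the Identifier is illustrative):

```
Section "Device"
    Identifier "Card0"                 # illustrative name
    Driver     "radeon"
    Option     "DynamicClocks" "on"    # let the driver downclock when idle
EndSection
```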
EruditeHermit: Are stream processing units used for any graphics rendering or are they solely meant for computation work on the GPU?
mattst88: graphics rendering as well, AFAIK
King_InuYasha: bridgman, you are there?
bridgman: yep, Stream Processors are just another name for the shader cores; shaders used to be programmable but only in a very limited way; these days they are really general purpose floating point engines wrapped with some fixed-function graphics bits for rasterization and texturing
bridgman: King_InuYasha; no
King_InuYasha: umm, I have been unable to use the latest Catalyst driver with Ubuntu Intrepid
King_InuYasha: 8.12 with 8.10
King_InuYasha: in fact, whenever i do use it, Xorg crashes utterly
King_InuYasha: and completely
King_InuYasha: and the fact is, I need OpenGL 2.1 support for my emulators
King_InuYasha: they don't work without it
bridgman: yeah, I'm not sure what the problem is there. It works for some folks but not others... seems like the problems are on 64-bit systems but that's just what I notice on the forums
King_InuYasha: mine is 32-bit
bridgman: ok, so much for that ;)
King_InuYasha: i don't even own a 64-bit capable system
King_InuYasha: even though I do want to
King_InuYasha: my brother has been pestering me about setting up the PS2 emulator so he can play Kingdom Hearts 2
King_InuYasha: since our playstations don't work anymore :(
bridgman: does the fglrx version bundled with Intrepid work for you ?
King_InuYasha: no version of fglrx works
bridgman: have you tried it on a fresh install or only after trying other drivers first ?
King_InuYasha: be it bundled, official, or even that leaked beta
King_InuYasha: in fact, i went through three hours trying to get rid of that blasted leaked beta after my brother cajoled me into installing it
King_InuYasha: i didn't even want to do it
King_InuYasha: I hate unofficial betas
King_InuYasha: my entire house runs on ATI cards
King_InuYasha: so this really sucks
EruditeHermit: what errors are you getting?
King_InuYasha: one error was a backtrace to radeon_dri.so
bridgman: yeah, we try to keep the betas limited for a reason; the devs get involved directly on beta issues but we can only do that for a limited number of people
King_InuYasha: sometimes, there is no error
King_InuYasha: it just doesn't work
King_InuYasha: sometimes it says, "No devices detected, no screens detected"
King_InuYasha: note, that this is a laptop, so there isn't a "card" to do anything with
EruditeHermit: bridgman: were you ever able to find out if r300 support was dropped in fglrx btw
EruditeHermit: King_InuYasha: perhaps in the bios is the video card selected?
EruditeHermit: it might have been disabled
EruditeHermit: for some reason
King_InuYasha: it is
King_InuYasha: it was set on auto
King_InuYasha: but now it is set to purely LCD
King_InuYasha: does fglrx even support Xrandr?
King_InuYasha: i wouldn't be surprised if it didn't, considering this IS a commercial driver
bridgman: RandR1.2 support was added fairly recently; maybe 2-3 months ago
King_InuYasha: that is surprising
King_InuYasha: no offense, but your track record with your proprietary driver isn't all that great
King_InuYasha: its gotten a lot better now though
King_InuYasha: the main reason I stuck with it was that I hate nVIDIA more than I hated being unable to use fglrx with AIGLX on ATI
King_InuYasha: and im reaping the rewards :D
King_InuYasha: FOSS drivers for all my machines that don't run emulators
bridgman: we've had AIGLX support for about 15 months now, haven't we ?
King_InuYasha: but it hasnt exactly worked properly until the last few
King_InuYasha: still, you guys are quickly becoming one of my favorite companies
AlanasAnikonis: i've been able to use only the 8.5 driver
King_InuYasha: AMD seems to be bringing you guys to a new level of awesomeness
bridgman: actually we started all of the fglrx work before AMD; roughly a 3 year project
bridgman: the open source stuff was definitely encouraged by AMD though
bridgman: Linux is really important to AMD
King_InuYasha: of course it is
King_InuYasha: it was the staple of AMD64 marketing
AlanasAnikonis: i'd love to learn to code graphics drivers, but .. there's so much one needs to learn before actually understanding all of it :(
bridgman: I guess I should say X really, not just Linux
bridgman: better to start now, it isn't getting any easier ;)
AlanasAnikonis: book me for a seminar! :P
King_InuYasha: especially with the radical changes coming up ;)
AlanasAnikonis: i mean, it would help if someone was actually lecturing about how a driver interacts with the kernel or X
AlanasAnikonis: and all that, i've been a high level Java guy forever
bridgman: that's a good idea; I'll talk to agd5f about it
AlanasAnikonis: but i'm so tired of not knowing how to trace it when system goes black
bridgman: once things settle down a bit ;)
King_InuYasha: its sad that Universities don't offer profs or guides on programming anymore
King_InuYasha: I want to learn to be a computer programmer
King_InuYasha: but I have no way of getting started
bridgman: there's only one way to learn
King_InuYasha: doing it?
AlanasAnikonis: there are many ways to make your learning more efficient :P
King_InuYasha: I even came up with two little mini projects for a goal to completion
AlanasAnikonis: and I don't wanna figure out all things for myself when someone more experienced could help out in the beginning
King_InuYasha: one was GStreamer support in the VirtualDub application, and another was to port the VirtualDub GUI to Qt
King_InuYasha: i tried learning wxWidgets, but I quickly got very lost....
bridgman: King_InuYasha; which GPU do you have ? I noticed that on Phoronix you mentioned the Mobility 9000...
King_InuYasha: I have both the Mobility 9000 and the Mobility 9600
King_InuYasha: or was it the Mobility 9700?
King_InuYasha: i dont know really, since its kinda hard to check in Ubuntu with aticonfig segfaulting and all the X config tools removed from the distro
AlanasAnikonis: I've had the X1950 for two years + 1 months now
MostAwesomeDude: King_InuYasha: To learn to program, read a book on the language you want to learn about, and then write something in that language.
King_InuYasha: my desktop, which currently runs Windows
King_InuYasha: has an ATI X1300
King_InuYasha: the reason I haven't moved to Linux is because my ATI TV Wonder 650 isn't supported in Linux
King_InuYasha: and I need that functionality
AlanasAnikonis: i don't even dare to upgrade my gfx drivers anymore...
King_InuYasha: neither do I on Windows
King_InuYasha: I have to on Linux
AlanasAnikonis: that's how little trust I have in them :(
EruditeHermit: King_InuYasha: fglrx doesn't work with 9600 right now or 9000
bridgman: King_InuYasha; some folks are reporting problems with R300-family GPUs on the last few releases of fglrx; that might be what you are hitting
King_InuYasha: I kinda noticed that -.-'
bridgman: we haven't been able to repro them in house yet AFAIK, but my info is a couple of weeks old
EruditeHermit: King_InuYasha: use the radeon driver for them for now
King_InuYasha: if you guys do remote assistance debugging, I would be willing to temporarily allow one of you guys at ATI to look into it
King_InuYasha: EruditeHermit, I am
EruditeHermit: King_InuYasha: I managed to get the 9-2 beta driver working with r300 though
EruditeHermit: King_InuYasha: have you tried that one?
King_InuYasha: but my emulators don't work because FBOs aren't supported in FOSS driver and the 9.02 beta driver destroys my Xorg install completely
King_InuYasha: it doesn't even load up to crash
EruditeHermit: I see
King_InuYasha: it just drops straight down into terminal
EruditeHermit: perhaps try the next release that they have
EruditeHermit: they are late on it though
bridgman: EruditeHermit; is it February ?
King_InuYasha: Jan 25, 2008
EruditeHermit: no, but its usually earlier in the month
King_InuYasha: like about Jan 20-22
King_InuYasha: that's when most of the releases were
EruditeHermit: the last one was the 12 or something
bridgman: yep; the holidays pushed december stuff earlier and january stuff later
EruditeHermit: so its a longer one
EruditeHermit: well, I hope its better for having more time =)
King_InuYasha: do you guys at ATI even have the earlier R300 chips available to test with?
King_InuYasha: and I know the Mobility 9000 will not be supported in fglrx
King_InuYasha: since that is R200 based
bridgman: EruditeHermit; I'm sure everyone spent their holidays testing and coding
EruditeHermit: bridgman: we can dream can't we?
King_InuYasha: are you sure we can't hope?
King_InuYasha: *are you sure, we can't hope?
MostAwesomeDude: Huh. Okay, so I got tired of playing guess'n'check with the clear code.
bridgman: hoping is allowed
MostAwesomeDude: Instead, I dumped a working CS from classic Mesa, and reversed it.
King_InuYasha: what is CS?
MostAwesomeDude: So now I've got a working clear, just not into anywhere correct.
EruditeHermit: command submission
MostAwesomeDude: CS is command submission. It's a thing we use to talk to the card.
King_InuYasha: does the Catalyst driver support shaders in OpenGL?
bridgman: goes out to find the bbq in the snowdrifts
MostAwesomeDude: King_InuYasha: Yeah.
EruditeHermit: bridgman: thanks
King_InuYasha: I really am looking forward to Wine supporting Pixel shaders through OpenGL
King_InuYasha: bridgman, thanks for being here and being a helpful guy
King_InuYasha: its people like you that give me more of a reason to recommend ATI to others
MostAwesomeDude: King_InuYasha: Shaders are supported, just not GLSL.
MostAwesomeDude: Or at least they were, last time I used Wine.
bridgman: I think Catalyst supports GLSL as well, doesn't it ?
MostAwesomeDude: Oh, yeah, definitely.
bridgman: ahh, you were talking about the open drivers
bridgman: but that'll all change once that MostAwesomeDude gets Gallium3D running ;)
MostAwesomeDude: Let's see. Of the various GLSL features, I think the only ones on the card are flow control and DERIV.
bridgman: King_InuYasha; it's the only way to learn ;)
MostAwesomeDude: I actually put DERIV into the r300 driver before.
MostAwesomeDude: Stuff like NOISE is going to require big fallbacks, and hopefully those can be pushed up into state trackers.
MostAwesomeDude: bridgman: I'm getting closer by the day. :3
bridgman: you're making really good progress actually...
EruditeHermit: MostAwesomeDude: how far are you?
bridgman: it compiles ;)
bridgman: (sorry, you had to be there)
EruditeHermit: thats one hard part down =)
MostAwesomeDude: EruditeHermit: It sets a large amount of state, and it can do trivial/clear, mostly.
MostAwesomeDude: I'm currently trying to nail down the right amount of state necessary to keep the card from hanging on emit.
EruditeHermit: only for r300?
MostAwesomeDude: Right now, yeah.
MostAwesomeDude: Although technically I'm on an RV410.
EruditeHermit: well that is good
EruditeHermit: you only started a few weeks ago
EruditeHermit: so that is good progress
EruditeHermit: 3 more months and you'll have everything working =)
MostAwesomeDude: EruditeHermit: Maybe. It is true that I code disturbingly fast. But with great speed comes bugs. :C
EruditeHermit: everyone else can fix the bugs =)
MostAwesomeDude: Meh. There's a reason I ask for permission to commit to master. :3
bridgman: given that you are implementing a new 3D API on top of a new CS API and a new memory management API, getting something running early seems like a Good Thing; there would be bugs even if your code was perfect
mattst88: MostAwesomeDude, I've never been able to wrap my head around mesa et al to contribute anything useful. where/how did you learn what you have?
osiris__: MostAwesomeDude: do you remember the method to reset the gpu engine? someone was working on it to implement the lock free driver
MostAwesomeDude: The actual Mesa stuff, I learned by reading the code. Lots of reading.
MostAwesomeDude: osiris__: It's in the r5xx accel docs, in the errata.
mattst88: you forced yourself to read Mesa code... where do you even start?
MostAwesomeDude: osiris__: RADEON_WAIT_UNTIL should be enough for most things, but the docs say how to reset the entire GPU.
MostAwesomeDude: mattst88: I started with r300_fragprog.c. I wanted r5xx support, and after poking airlied a bunch, I figured out that we needed an r500_fragprog.
MostAwesomeDude: So I read that source a bunch, and asked a lot of questions, and eventually stuff started making sense.
MostAwesomeDude: "git grep" is your friend, BTW.
mattst88: so after the initial _huge_ learning curve, is it so bad?
osiris__: MostAwesomeDude: hmm, can't find it there
MostAwesomeDude: mattst88: Not really, and you don't have to learn everything at once.
MostAwesomeDude: I didn't learn about how the fog works until I started messing with fog, and I still don't know how the vertex fetch and VPS work.
King_InuYasha: its incremental in my experience
King_InuYasha: I learned basic ANSI C when I wrote a program for my science fair project in 9th grade
King_InuYasha: really, its learning as you go
King_InuYasha: but you need to be smart about it
King_InuYasha: and a lot of times, a mentor can be very helpful
airlied: nobody knows how it all works at the detailed level :)
airlied: you just keep the high level architecture in mind, and page in the other bits as needed.
King_InuYasha: i have a friend that learned PHP by going up from HTML, to XHTML, to JS, to PHP
King_InuYasha: but even before that, he knew scripting
MostAwesomeDude: airlied: That's almost exactly how I do it.
airlied: my biggest issue is I don't keep enough notes.
airlied: and sometimes things get paged out too far :)
MostAwesomeDude: Yeah, I'm definitely taking notes.
airlied: like figuring out swtcl again is always a pain.
MostAwesomeDude: Haha, that's because Mesa's swtcl is ridiculous.
airlied: MostAwesomeDude: I actually figured out how some of the mesa tnl code worked last week.
MostAwesomeDude: airlied: Gallium TCL is niiiice. :3
stoned: hey awesome dude
stoned: whats up
MostAwesomeDude: stoned: Not much.
stoned: you are a dude
stoned: who is most awesome
stoned: thats a crazy nickname man
MostAwesomeDude: airlied: Kernel code is complaining about one of my relocs... do I have to put relocs exactly two writes before a packet3?
MostAwesomeDude: stoned: It's a Bill 'n' Ted reference, actually.
EruditeHermit: stoned: are you a dude who is stoned?
stoned: no I'm just stoned
stoned: I never got into bill and ted
stoned: I watched 1 movie
stoned: it was insanely stupid
stoned: I felt like killing myself afterwards
stoned: I did enjoy cheech and chong movies though
stoned: bill and ted are like that idiotic Dude, Where's My Car kind of movie
stoned: utterly pointless
stoned: I made a social faux pas
stoned: I accidentally made it seem like I was insulting MostAwesomeDude's reference
stoned: thats not the case
stoned: my bad
bridgman: can you think of a more Excellent Adventure than Gallium3D ?
MostAwesomeDude: Not at the moment, no.
MostAwesomeDude: Hm. I'm supposed to do a CP_NOP after each reloc? That's kind of weird...
King_InuYasha: that movie was awesome...
King_InuYasha: the emulator might actually work....
stoned: what is gallium
King_InuYasha: just had his hopes crushed
King_InuYasha: the emulator works now...
stoned: too bad
King_InuYasha: but at approximately 8.59 FPS
stoned: your hopes were already crushed
stoned: can you put em back together or something? I'm just high, time to hit the sack
stoned: bye bye
King_InuYasha: i was hoping the software OGL renderer would be somewhere around 40 or 50FPS
King_InuYasha: but that was apparently too much to ask for from Mesa
MostAwesomeDude: Hm, nevermind, looks like those NOPs are being inserted automatically. So why is it whining?
bridgman: stoned; Gallium3D is a new proposed API for exposing the acceleration hardware of modern GPUs; first application is replacing the existing hardware driver subsystem in Mesa (which was designed for older, fixed-function GPUs)
airlied: MostAwesomeDude: relocs are post packet
airlied: MostAwesomeDude: the NOP is the reloc
bridgman: from a radeon perspective, the interesting thing is that most of GL 2.x has been implemented in Mesa to run over Gallium3D, so in theory any GPU with a Gallium3D implementation picks up GL 2.0 support
MostAwesomeDude: airlied: I've got a simple reloc for colorbuffer offset, exactly like in classic Mesa, and libdrm's complaining.
MostAwesomeDude: "ERROR Packet 3 was 138e should have been c0001000: reg is 4e28"
MostAwesomeDude: 4e28 is colorbuffer0 offset, so that's right. But I have no control over that NOP, and radeon_cs_print says it's a NOP...
stoned: I'm looking it up now
airlied: MostAwesomeDude: it should have been a nop but it got another reg
airlied: so for some reason it got a packet 0 instead of a reloc
MostAwesomeDude: airlied: Next write is to 4e38, colorbuf0 pitch, though.
airlied: yup so it was expecting a reloc nop and it didn't get one
airlied: for some reason the reloc didn't get emitted.
MostAwesomeDude: airlied: I'm looking very closely, and there's nothing except that there's an extra parameter on the Mesa RELOC macro.
MostAwesomeDude: But it's unused.
MostAwesomeDude: Write offset to CS in place of register data, and then write a reloc.
airlied: MostAwesomeDude: code anywhere?
MostAwesomeDude: airlied: Lemme pastebin.
spstarr: airlied: apparently, you can get a snapshot of each day's rawhide on mash :)
spstarr: so today's full distro build on koji mash
MostAwesomeDude: airlied: http://pastebin.ca/1318113
bridgman: stoner; here's a recent presentation on Gallium3D : http://akademy.kde.org/conference/slides/zack-akademy2008.pdf
stoned: did you call me a stoner?
stoned: namecalling isn't nice man
stoned: hey thats pretty cool
bridgman: auggh, sorry; on another forum there's a guy named "stoner" who's pretty active
bridgman: not a stoner either actually, carried a Stoner machine gun in his last tour
stoned: I don't care man, I was just messin' with ya
MostAwesomeDude: airlied: I think it's fairly solid code. Without that reloc, the card takes it and doesn't say anything, but of course I don't see any results.
King_InuYasha: what is Gallium3D?
King_InuYasha: I have been hearing a lot about it
King_InuYasha: some people have been saying its a replacement to Mesa
King_InuYasha: others have been saying its a rewrite or an addon of Mesa
bridgman: it's not a replacement for Mesa as much as a replacement for the hardware driver portion of mesa
King_InuYasha: i thought that was DRI?
bridgman: if you take a look through the link I just posted that'll give you an idea
King_InuYasha: you posted a link?
MostAwesomeDude: < bridgman> stoner; here's a recent presentation on Gallium3D : http://akademy.kde.org/conference/slides/zack-akademy2008.pdf
bridgman: yeah, maybe 5 minutes ago, when I accidentally called "stoned" "stoner"
bridgman: that ;)
bridgman: King_InuYasha; DRI is used in two different ways; one is the protocol that Mesa (or any direct rendering drm client) uses to coordinate with the X server so that they take turns using the GPU
King_InuYasha: i thought Tungsten Graphics was bought up by VMware?
bridgman: the other is "the whole thing", Mesa+DRM+part of X
King_InuYasha: why is all that so important?
bridgman: yes it was, but they're still doing Tungsten Graphicy things
bridgman: the Mesa driver portion was designed >10 years ago when GPUs were very different; it's due for a complete rewrite
bridgman: Gallium3D is that rewrite; it also happens to be useful in a lot of other ways
bridgman: s/driver/hw driver/
EruditeHermit: bridgman: supposing gallium is established in the future, do you see AMD moving to it, or are there licensing issues for moving proprietary code to it?
King_InuYasha: it looks like Gallium will offer the ability to rebuild the same drivers for other platforms?
bridgman: we already have something like Gallium3D in our drivers; we went through that change about 3 years ago on Windows and maybe 15 months ago on Linux
EruditeHermit: but will you drop your in house stuff
EruditeHermit: and go with gallium if it is determined that other stuff comes along free
King_InuYasha: so this would theoretically make it possible for FOSS drivers to be built on Windows/ReactOS too?
EruditeHermit: like easily portable drivers for windows mac etc as a result
EruditeHermit: or even video decoding
EruditeHermit: or GPGPU
EruditeHermit: or whatever work is done by others
King_InuYasha: GPGPU is seeming more and more like a tech fad with each day...
King_InuYasha: nobody besides nVidia even has a real solution to GPGPU
bridgman: EruditeHermit; unlikely, I think Gallium3D will let the FOSS 3D go from maybe 30% of fglrx performance to 70-ish %, but the last 30% are just a huge amount of work and we wouldn't want to do it separately for each OS
MostAwesomeDude: Nobody has a real problem needing GPGPU, either...
bridgman: King_InuYasha; yes
King_InuYasha: now this would be interesting
bridgman: (yes to Windows, no to only NVidia having a solution for GPGPU ;)
King_InuYasha: seeing intel, ati, nouveau, etc. on ReactOS
King_InuYasha: this would really boost the viability of the project
bridgman: MostAwesomeDude; nobody needs supercomputers either, but they sure save time
King_InuYasha: especially since none of the PnP stuff works either....
King_InuYasha: and it already does include Mesa
King_InuYasha: and I have yet to see AMD or Intel offer a GPGPU solution
MostAwesomeDude: airlied: The plot thickens. Even with no relocs set up, it still complains. Looks like adding that packet3 for the verts did it.
MostAwesomeDude: King_InuYasha: CUDA, Cg, etc. can be done on non-nVidia chipsets if the drivers support it.
King_InuYasha: are there any that do?
King_InuYasha: i just realized something horrible
King_InuYasha: all my emulators use Cg....
MostAwesomeDude: I dunno if that's actually "horrible."
King_InuYasha: well, is Cg supported by the OSS driver?
RTFM_FTW: "and I have yet to see AMD or Intel offer a GPGPU solution" ...umm AMD had a number of GPGPU solutions
King_InuYasha: or even fglrx?
RTFM_FTW: CAL and before that CTM being two of them
RTFM_FTW: and now OpenCL
King_InuYasha: ATI came up with OpenCL?
RTFM_FTW: AMD is helping to drive CL forward
bridgman: yeah, I'm scratching my head here a bit... if my internet connection was faster I would already have posted links ;)
mattst88: bridgman, why no broadband?
RTFM_FTW: in any case the CL spec itself was authored by a few from Apple
RTFM_FTW: who were BTW ex-ATI employees
King_InuYasha: you know
King_InuYasha: that part doesnt surprise me in the least
bridgman: mattst88; I live 60km out of the city, 19km from the central office so no DSL, too few people for cable, and I live on a north facing hill in a 60' pine forest so no satellite
King_InuYasha: it explains why initially the first Mac OS X-shipped Macs had an ATI card in them
RTFM_FTW: the major players on the CL front right now are (no surprise) Apple, AMD / ATI, Nvidia, Imagination Technologies, ...
King_InuYasha: ironically, it seems Microsoft is sitting this one out
RTFM_FTW: uh what?
King_InuYasha: when it was them originally that drove this kind of thing
MostAwesomeDude: How is that ironic?
RTFM_FTW: I'd suggest doing a Google for "Direct3D 11"
MostAwesomeDude: Microsoft's irony is firmly rooted in Brook's Law IMO.
RTFM_FTW: the results of that will show you that MS isn't sitting this one out
RTFM_FTW: not by any stretch of the imagination :D
MostAwesomeDude: If they appear to be sitting something out, it's only because the internal group working on it isn't very vocal.
King_InuYasha: i see
King_InuYasha: DX11 is only for Vista and Win7 because of the built in compositing manager
RTFM_FTW: and BTW I'm referring to their involvement with GPGPU
RTFM_FTW: not CL et al
bridgman: do a search on dx11 and compute
King_InuYasha: i noticed that
King_InuYasha: i stand corrected
bridgman: or Compute Shaders
King_InuYasha: but they did drive it originally
RTFM_FTW: DX11 CS is a major API feature for the DX runtime on Windows
King_InuYasha: they drove it through media decoding/encoding through the GPU in DirectX Media
King_InuYasha: this was a LONG time ago though
King_InuYasha: I'm not sure if DirectX still supports that...
King_InuYasha: so bridgman, what is your take on all this?
RTFM_FTW: more flexibility is always a good thing :D
bridgman: not much; once you get the same GPGPU standard supported across multiple vendors everyone will use it; having two cross-vendor standards (OpenCL and DX11 Compute Shaders) is a pain for the HW vendors but no worse than having OpenGL and DirectX
RTFM_FTW: honestly its the exact same picture we are in now
MostAwesomeDude: And really the only bug in the ointment is Microsoft.
MostAwesomeDude: Mostly because Microsoft-sponsored APIs get implemented no matter what.
RTFM_FTW: well GL is equally as critical in certain cases
RTFM_FTW: or CL for that matter
RTFM_FTW: the Mac platform being one such example
RTFM_FTW: since everything is driven through the GL API
RTFM_FTW: quite honestly GL on Mac OS X looks quite similar to D3D on Windows heh
King_InuYasha: well, their CoreAcceleration API does the same thing as DirectX
RTFM_FTW: it would definitely be nice to have strong GL and CL support for Linux though
King_InuYasha: CoreAnimation, etc.
RTFM_FTW: umm no CA isn't at all like DX
RTFM_FTW: none of the "Core" APIs are
King_InuYasha: what do they do then?
spstarr: RTFM_FTW: do you work on the design or developer?
spstarr: @ AMD?
RTFM_FTW: umm I write drivers
RTFM_FTW: for Mac OS X
King_InuYasha: good. O.o
King_InuYasha: then I suppose you WOULD have a very good grasp of the Apple APIs
spstarr: RTFM_FTW: so you've met bridgman in person?
RTFM_FTW: nope Bridgman is across the border :D
bridgman: nope; actually I spent a very confusing couple of days trying to figure out who this RTFM guy was; he seemed to know a lot more about ATI internals than I was comfortable with ;)
RTFM_FTW: heh I'll have to fly up to CA before I can do that :D
bridgman: the skiiing is better up here
RTFM_FTW: oh definitely
bridgman: how many "i"s in skiing again ?
bridgman: it doesn't look right with 1, 2 *or* 3 ;)
King_InuYasha: isnt skiing with 2 i's?
bridgman: yeah, that seems the closest anyways
bridgman: hey, the AMD GPGPU page finally came up ;)
RTFM_FTW: oooh awesome
bridgman: and here's the software download page : http://ati.amd.com/technology/streamcomputing/sdkdwnld.html
bridgman: I really need a faster Internet connection
bridgman: hopefully this summer I'll get enough trees cut down to make room for a garage and clear a path to the satellite
RTFM_FTW: definitely :D
bridgman: if nothing else I'll finally have enough firewood to heat the house with wood and stop using up all the propane
MostAwesomeDude: airlied: Found it. Apparently, the kernel gets freaked out if I have a packet that I *should* be relocating, but didn't.
spstarr: increases the heat +1 degree
spstarr: damn winter
spstarr: shakes fist
airlied: MostAwesomeDude: well it would, relocs are sort of mandatory :)
MostAwesomeDude: airlied: Ah, see, I didn't know that.
MostAwesomeDude: I might set up a convenience function that does checking on my side for that.
MostAwesomeDude: Anyway, so I'm getting my state emitted. Nothing's showing up, though.
MostAwesomeDude: Of course, odds are quite good that that is because I'm not setting everything up right.
MostAwesomeDude: Wait a sec... I wonder if...
bridgman: no, no, don't pull on that, you don't know what it might be attached to
MostAwesomeDude: Hm, clearly, I don't.
MostAwesomeDude: Hmm. Well, I'm getting something-ish.
bridgman: something-ish is good
MostAwesomeDude: This board *really* enjoys locking up.
MostAwesomeDude: Anyway, before it locked up, I saw a large white point in the middle of the buffer.
MostAwesomeDude: Hehe. Before it dies, you see... *THE POINT*
MostAwesomeDude: I'm sure that if I set the color to the actual color, and adjust the point size --
MostAwesomeDude: Oh, c'mon, it wasn't *that* bad, was it?
bridgman: MostAwesomeDude; yes it was
MostAwesomeDude: Anyway, I think if I change the color and point size, I've got it.
bridgman: you clear the screen with a point ?
MostAwesomeDude: Yeah, same way that fglrx does.
bridgman: we clear the screen with a point ?
MostAwesomeDude: The maximum point radius / line width on Radeons is 10000+ pixels.
bridgman: that'll do... but I thought we used some Hi-z trick to clear the screen...
MostAwesomeDude: And that's only because we can't count past 10922 in those little registers. :3
bridgman: yeah, well you wanted metric registers
bridgman: actually it was the EU ;)
MostAwesomeDude: Yeah, when I get HiZ/HyperZ going, we'll want to find a way to do depth and stencil buffers differently.
bridgman: tries to remember back to when we introduced HiZ
bridgman: it was simple then
MostAwesomeDude: Dang, it went away. Hm.
MostAwesomeDude: Whoa, there it is!
MostAwesomeDude: Okay, so I've only one more bug to take care of.
MostAwesomeDude: Gotta set the color. However...
MostAwesomeDude: There's a race. The buffer's only swapped on resize, which leads me to think that it's a DRI2 bug.
MostAwesomeDude: Oh, and my motherboard likes to lock up when direct rendering's happening, but kerneloops assures me that it's my board and not DRM.
spstarr: as my virtualbox has just shown, do not install rawhide from scratch with ext4.. it wont boot ;)
MostAwesomeDude: Boots for me.
spstarr: you dont get an error?
spstarr: mount: error mounting /dev/root on /sysroot as ext4: no such file or directory?
MostAwesomeDude: Maybe it broke over the weekend?
spstarr: you used ext4?
spstarr: maybe :)
MostAwesomeDude: Yep, fresh install, all defaults.
MostAwesomeDude: Haha, awesome. "Patch available, not yet merged by Linus" for the oops from my mobo.
vehemens: MostAwesomeDude: Liked the Gallium3D charts. I'm assuming that TG doesn't have any EEs.
vehemens: I'm basing that on the picture.
MostAwesomeDude: vehemens: EE?
vehemens: Electrical Engineer
bridgman: yeah, the picture looks more artistic than engineering-y
bridgman: EEs only draw straight lines, even for circles
vehemens: The part that grabbed my attention was the power strip floating in the pool.
bridgman: ah yes, that was good; MAD would appreciate it
MostAwesomeDude: I'm still a bit lost.
bridgman: I'm talking about the diagram that shows APIs, then a slice of Gallium, then HW-specific backends, then another slice of Gallium, then OS-specific winsys things
vehemens: Water + Beer + Electricity
mjg59: The three food groups
bridgman: vehemens is talking about a photograph of a small swimming pool with an electric grill in the middle, connected by a series of extension cords culminating in a power bar floating on a couple of rubber flip-flops in the pool
bridgman: people are lounging in the pool
MostAwesomeDude: Clearly, I don't spend enough time around EEs?
bridgman: apparently not; you need to hang out with them outside of school, going to classes is not sufficient
MostAwesomeDude: I have heard much of this "social gathering" of which you speak.
bridgman: still my favorite electricity pics : http://www.abrasha.com/misc/women.htm
bridgman: it's like the linux plumbers conference without the linux and the plumbing
bridgman: I don't really see the point of it
spstarr: oh i think i know why
spstarr: maybe SATA driver not in ramdisk
spstarr: turns off SATA
vehemens: Based on the abrasha rating system, the pool is about a 2.
vehemens: Then there are the bonus points when more than one person is involved.
agd5f: osiris__: TEX_PTR and COL_PTR define the positions in the input stream
agd5f: COL_FMT and SEL_S/T/R/Q define the size
agd5f: er rather the size of the input stream is defined by it_count and ic_count
agd5f: all colors are 4 components
agd5f: colors and textures have separate input streams
agd5f: col_ptr is per color, not per component
agd5f: tex_ptr is per component
agd5f: well, same bus, but they are handled separately from the register's perspective
agd5f: osiris__: http://pastebin.com/m4f954f2d
agd5f: I think this should clarify RS
agd5f: at least for r3xx/r4xx. r5xx is similar but a bit more flexible
MostAwesomeDude: 'k. So I just pushed.
MostAwesomeDude: Anybody with an RV410 can pull and enjoy trivial/clear.
MostAwesomeDude: Actually, anybody with an RV410 and DRI2/KMS/GEM should pull and tell me whether or not they get a hardlock after resizing.
MostAwesomeDude: I think it's just me, though.
MostAwesomeDude: Will make it more flexible (more chipsets) later. Need sleeps now.
bridgman: MostAwesomeDude: congrats!
bridgman: agd5f; clear as mud... so a rasterizer instruction is really "something to rasterize", and you have up to 16 rasterizer instructions depending on how many colours & textures are being used in the fragment shader ?
bridgman: vap lets you pack only the stuff you're using into the outgoing stream; rasterizer instructions let you unpack it again ?
agd5f: well, feed and swizzle what you want
agd5f: for the pixel shaders
bridgman: tries to understand exactly what hardware resource we are saving by scootching everything down then expanding it again
agd5f: everyone loves swizzles
bridgman: yes, I was just having a Bermuda Rum Swizzle as we speak
bridgman: time to go read again ;)
agd5f: bridgman: the r480 RS document is actually pretty good
mjg59: Does AMD have any sort of gpu/firmware communication via ACPI going on?
mjg59: Because I've just discovered that there's an entirely undocumented one for nvidia, to go with the documented Intel one