
Opera 12.17 is here...

Discussion in 'General Chat' started by deathshadow, Apr 25, 2014.

  1. #1
    This gives me at least SOME hope as they are at the very least doing security fixes for Opera 12 for the time being.

    Given what a pathetically useless piece of crippleware Opera 15/newer is by comparison, much less the halfwit idiocy of switching to FF/Chrome style version releases where there's a major revision number change every time there's a stiff breeze... It's nice to at least see some effort to maintain the REAL feature-complete Opera, as opposed to the steaming pile of manure known as Chrome with the Opera logo slapped on it any old way.

    Not that I'm calling Chrome pathetically useless crippleware... Oh wait, from a user interface perspective, YES I AM.
     
    deathshadow, Apr 25, 2014 IP
  2. digitalpoint
    #2
    Now if only they would start using a modern protocol so it can actually get page resources faster than molasses. :)

    http://caniuse.com/spdy

    HTTP was created in the late '80s and designed for the computers of that era. :)
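
    For context, SPDY at the time was as much a server-side switch as a browser feature. A minimal sketch of how it was typically enabled in nginx (assuming nginx 1.5+ built with --with-http_spdy_module; not this site's actual config):
    server {
    	listen 443 ssl spdy;	# SPDY rides on top of TLS, negotiated via NPN
    	server_name example.com;
    	ssl_certificate     /etc/ssl/certs/example.com.crt;
    	ssl_certificate_key /etc/ssl/private/example.com.key;
    }
    Code (markup):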
     
    digitalpoint, Apr 25, 2014 IP
  3. deathshadow
    #3
    When you said that last time it kind of triggered my BS alarm but I didn't contest it at the time... thanks for linking to a compatibility table because... well... notice that it was supported as of 12.1 on that chart?

    Though honestly, SPDY is just more sweeping under the rug of the real problem: dumbass developers who have no business building websites in the first place blowing a megabyte in several dozen files on single-digit k of actual content text and a half dozen content images. See these very forums, where even in Firefox I have to block x.dpstatic.com to even TRY to get into the forums in a useful manner, thanks to the 551k in 8 files of scripttard "javascript for nothing" -- entirely what one comes to expect the moment you see the fat bloated idiotic train wreck of "let's make it cryptic and harder to use while calling it easier and hope placebo effect makes up for it" BULL known as jQuery.

    Because again, SPDY would probably serve no legitimate purpose if people didn't waste 778k in 21 files -- with a ridiculous 51k of markup -- to deliver a mere 2.3k of plaintext and only two actual content images (this thread before this post was added). Blocking jQuery and all the x.dpstatic crap guts it down closer to what it probably should be in the first blasted place -- around 120k in 12 files... though really there's no excuse apart from developer ineptitude for it to be more than HALF that... particularly since there's only

    It's scripttard BS like that which makes me run scripting blocks, domain blocks, user CSS and a whole slew of other things to even come close to making usable the ineptly coded front-ends on forums like XenForo, vBull, or phpBB.

    Never know whether to laugh or cry when I see three quarters of a megabyte doing 70k's job.
     
    deathshadow, Apr 25, 2014 IP
  4. digitalpoint
    #4
    Oh, if Opera 12.17 supports SPDY, then you are all good as far as that goes.

    But... SPDY has nothing to do with website bloat or trying to work around it. If something makes everything 50% faster, would you choose not to use it? That's like saying you want to stick with dial-up because broadband is just a patch for bloated websites.

    As far as dpstatic pulling down a ton of data, that's absolutely not the case unless your browser is doing something *very* wrong... The reason dpstatic.com is used is *because* I'm a speed freak. Being able to pull down static resources (CSS, JavaScript, images, etc.) without sending useless cookie data with each request saves bandwidth on every request. In addition, anything you get from that domain instructs your browser to not even bother checking whether the resource has changed until 1 year after it was last fetched (it doesn't make a bunch of HTTP requests checking if that static content changed).

    In fact, for this page view, I opened up the network monitor, and not a SINGLE HTTP request was made to dpstatic.com for anything (because I've visited the site within the last year, it knows to not bother looking for anything there again).
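
    For reference, the far-future caching described here comes down to response headers along these lines (values illustrative, not necessarily the site's exact headers):
    HTTP/1.1 200 OK
    Content-Type: text/javascript
    Cache-Control: public, max-age=31536000
    Code (markup):
    With max-age set to one year (31,536,000 seconds), a conforming browser serves the file from its local cache without making any conditional revalidation requests.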

    As far as thinking jQuery shouldn't be used, it depends on who's using it, and for what purpose. If you get developers using it as a crutch to do animations (when they really should be doing CSS3 transforms instead), I'd agree with you. But if you are using it for the right reasons, it actually *removes* bloat because you don't need to write the same functions (for example AJAX or DOM selectors) differently for every browser. Thankfully new browsers are starting to be more uniform in how they handle things, so jQuery is becoming less necessary these days unless you want to support outdated browsers (like Opera 12 {cough, cough}... heh).

    Blaming bad UI or whatever else it is that bothers you about jQuery has nothing to actually do with jQuery. jQuery is a fantastic tool for eliminating bloat (again, you only need to write code once instead of 5x for 5 different browsers). Your hatred of jQuery sounds more like you should hate whatever developers made something you hate *with* jQuery... but by that same measure, shouldn't you hate HTML because some developers do some really stupid shit with it also? :)

    I've never tried to use this site with jQuery blocked, but I suspect it actually makes the site *slower*... again, no HTTP request is made after the first one to get jQuery itself (not even to check if it's been modified) unless your browser is doing something non-standard/crazy... and second, without AJAX support, simple things like replying to a thread will trigger an entire page load (unnecessary page loads).
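
    For illustration, the AJAX reply being described is a few lines with jQuery (the endpoint and element IDs here are hypothetical, not XenForo's actual API):
    // Submit a reply in place instead of triggering a full page load
    $('#replyForm').submit(function (e) {
    	e.preventDefault();
    	$.post('/threads/reply', $(this).serialize(), function (html) {
    		$('#messageList').append(html);	// drop the rendered post into the thread
    	});
    });
    Code (markup):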
     
    digitalpoint, Apr 25, 2014 IP
    ryan_uk likes this.
  5. deathshadow
    #5
    While it reduces handshakes and sends headers compressed, if it's delivering a 50% gain you've done something wrong. Some of my sites are so small that it actually takes longer and increases overhead, since SPDY makes HTTPS mandatory -- SPDY is great if you "need" HTTPS; it makes things take LONGER if you don't, since you introduce the overhead of encryption.

    A 'speed freak' would not be wasting 515k of javascript on a FORUMS. I agree that using a separate domain so you aren't wasting the overhead of header info like cookies is a good idea; my problem is with the fat bloated crap it pulls that sucks laptop and mobile batteries dry, causes execution spikes when it runs, AND makes it take longer to actually RENDER the page. That's why I'm blocking x.dpstatic.com/j -- to get rid of the scripttard BS that just gets in the way of using the site.

    That's assuming it doesn't get flushed to make room for other sites you visit... and I can easily blow out my browser cache inside a day, making all that dicking around with cache-control nonsense just more placebo BS to make up for developer ineptitude.

    I laugh when certain server owners even use that as a way to make mobile faster; I'm SO sure that with flash mass storage and memory a quarter or less of your average desktop's, all that cache-control crap is actually going to be obeyed... RIGHT.


    By itself -- compressed -- jQuery is half the total filesize I have as an ideal target for an entire template uncompressed -- that's HTML+CSS+SCRIPTS+IMAGES, not counting content. Uncompressed, jQuery is 75% of my upper limit for an entire PAGE FULL OF CONTENT compressed.

    Really, if after gzip compression a developer "needs" more than 24k of javascript -- even including any ajaxtardery -- on a forums, they probably have no blasted business writing code for a forums.

    Though that "write the same function for every browser" line is the typical fantasy-land BULL that makes me think you don't actually know enough JavaScript to make a sane and rational choice about using scripting. Oh noes, you might need one crappy little function to try/catch down the XML chain and another small function to provide a polyfill for querySelector/querySelectorAll -- which COMBINED shouldn't break 1.5k uncompressed -- that's SO worth a fat bloated 96k library that encourages slow, sloppy, needlessly and pointlessly cryptic methodology.

    The handful of things that take less code with jQuery than without it usually don't belong on a website in the first place or are CSS's job. Everything else can typically be written smaller without it, or more efficiently/feature-rich at the same size. Take crap like the typical image rotator -- which in jQuery I've seen range from 8 to 16k while relying on the library, for functionality that shouldn't even break 6k without it. Using my upcoming JS library that's meant to kick "framework" bullshit to the curb, I've got a full featured "slider" demo that only takes 3.5k with a 16k library (all numbers uncompressed)...

    I've seen form handlers to the tune of 16k or more that rely on that garbage HTML 5 'data-' attribute crap (nothing like putting stuff in the markup that has no business in the markup) that don't equal the functionality of what I can do in 4k of scripting WITHOUT any library... and of course my totals are with actually generating the scripting-only elements in the scripting and hooking onto elements instead of inlining scripting with attributes, which is why I usually end up with less markup in the process as well! ... and that's before we even talk graceful degradation.
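
    For scale, a bare-bones no-library content rotator really is only a dozen or so lines -- a minimal sketch of the framework-free approach (not his actual library; the element ID and timing are made up):
    // Cycle the children of a container every `delay` milliseconds
    function rotator(id, delay) {
    	var slides = document.getElementById(id).children,
    		current = 0,
    		i;
    	// hide everything but the first slide up front
    	for (i = 1; i < slides.length; i++) slides[i].style.display = 'none';
    	setInterval(function () {
    		slides[current].style.display = 'none';
    		current = (current + 1) % slides.length;
    		slides[current].style.display = '';
    	}, delay);
    }
    rotator('banners', 4000);	// assumes <div id="banners"> with one child per slide
    Code (markup):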

    I've never seen anything done using jQuery that didn't fall into one of those three categories -- would be simpler/less code without the library (WITHOUT counting the library against that!), is CSS's job, or has no business on a website in the first damned place. That people are DUMB ENOUGH to buy the lie that jQuery is good can only be attributed to the use of card stacking, testimonial, glittering generalities and bandwagon to create a placebo effect. Me, I go right for the plain folks, transfer and name calling.

    Also, it seems like convincing people of that is an uphill battle against cognitive dissonance, akin to convincing a faith-tard that their holy writ ain't the least bit legit and is a bunch of bull****.

    Always love the 'unnecessary pageloads' defense used by the Ajaxtards -- I'm not saying that using Ajax for that is bad, but if your markup and content is so bloated (like say... 52k doing 10k's job) that it makes a significant difference (given that everything else should be cached on a normal pageload), then you have done things horrifically and terrifyingly badly. It just reeks of the "pageloads are evil" paranoia spewed forth by people who don't understand enough HTML/CSS to write their page efficiently in the first place... like say (again, to flog the deceased equine) 51k of markup doing 10k's job?

    https://x.dpstatic.com/j/xenforo/xenforo.js
    horrifically and terrifyingly badly...

    I'd eat a bullet before I'd allow 174k of scripttard bull that requires a 96k library to even function on something as ridiculously simple as a forums; where even with AJAX submits and alert polling there's no excuse to break 32k WITHOUT any 'framework'... But yeah, sure -- it makes everything 'faster' and 'easier' -- RIGHT. Tell me another one Josephine...

    All this stuff is just sweeping deep rooted issues under the rug and going "la la la la" Vancome lady style. Again, if you don't know what's wrong with wasting 55k of markup on 2.3k of plaintext and two content images, or 778k in 21 files for a thread load on a forums (something that probably shouldn't even break 70k in 8 files) -- then wonder why you need all these goofy tricks to "speed up the site" and constantly struggle with server load... shya, you know what? uh-uh...

    So excuse me if my reaction to calling yourself a "speed freak" is akin to Bill Cosby's Noah... RIGHT...
     
    deathshadow, Apr 26, 2014 IP
  6. deathshadow
    #6
    Just to illustrate what I mean by developer ineptitude and sweeping bad code under the rug:
    <nav>
    	<fieldset class="breadcrumb">
    		<div class="boardTitle"><strong>Digital Point</strong></div>
    		<span class="crumbs">
    			<span class="crust homeCrumb" itemscope="itemscope" itemtype="http://data-vocabulary.org/Breadcrumb">
    				<a href="//www.digitalpoint.com/" class="crumb" rel="up" itemprop="url"><span itemprop="title">Home</span></a>
    				<span class="arrow"><span></span></span>
    			</span>
    			<span class="crust selectedTabCrumb" itemscope="itemscope" itemtype="http://data-vocabulary.org/Breadcrumb">
    				<a href="https://forums.digitalpoint.com/" class="crumb" rel="up" itemprop="url"><span itemprop="title">Forums</span></a>
    				<span class="arrow"><span>&gt;</span></span>
    			</span>
    			<span class="crust" itemscope="itemscope" itemtype="http://data-vocabulary.org/Breadcrumb">
    				<a href="https://forums.digitalpoint.com/.#the-digital-point.1" class="crumb" rel="up" itemprop="url"><span itemprop="title">The Digital Point</span></a>
    				<span class="arrow"><span>&gt;</span></span>
    			</span>
    			<span class="crust" itemscope="itemscope" itemtype="http://data-vocabulary.org/Breadcrumb">
    				<a href="https://forums.digitalpoint.com/forums/general-chat.19/" class="crumb" rel="up" itemprop="url"><span itemprop="title">General Chat</span></a>
    				<span class="arrow"><span>&gt;</span></span>
    			</span>
    		</span>
    	</fieldset>
    </nav>
    Code (markup):
    HTML 5 bloat redundant to heading navigation (that stupid malfing nav tag), endless pointless classes for nothing, that stupid 'itemprop' code bloat that nothing actually uses, absolute URI's for no good reason, little if anything resembling semantic markup with the breadcrumbs effectively a run-on sentence since there are no actual block level tags, content cloaking and/or STRONG doing a heading's job (since that's a non-render element, it's the former!)... and what the blue blazes makes a section of code with no INPUT, BUTTON or TEXTAREA be a FIELDSET? REALLY? HERPAFREAKINGDERP.

    It just reeks of "semantics, what's that" and HTML5-tardery akin to the old microformats junky data scraping BS. There is NO legitimate excuse other than developer ineptitude for that section of code to be anything more than:
    <ul class="breadCrumbs">
    	<li><a href="//www.digitalpoint.com">Home</a></li>
    	<li><a href="/">Forums</a></li>
    	<li><a href="/.#the-digital-point.1">The Digital Point</a></li>
    	<li><a href="/forums/general-chat.19/">General Chat</a></li>
    </ul>
    Code (markup):
    Since everything else can either be done from the CSS, is 'wishful thinking aria roles' bullshit that NO UA gives a flying purple fish about, or just plain has no business in the markup in the first place. That's 1.5k vs. 0.25k, which in 99.999% of usage scenarios is 'close enough'; actually, the latter is more useful to people on screen readers since it won't come across as a run-on sentence.
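
    The separators and styling hooks the original buries in the markup can come from the stylesheet instead -- a sketch of CSS that would dress up that minimal list (selectors match the example above):
    .breadCrumbs li {
    	display:inline;	/* run the crumbs together on one line */
    }

    .breadCrumbs li + li:before {
    	content:"> ";	/* separators are presentation, so they live in the CSS */
    }
    Code (markup):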

    ... and that's basically what the AJAX-tardery, JS for nothing, and goofy server tricks are trying (and failing) to sweep under the rug, letting the server take longer to build the page's content and waste time string processing every time there's a pageload.
     
    deathshadow, Apr 26, 2014 IP
  7. digitalpoint
    #7
    "Overhead os encryption" is something very old. These days, there isn't any measurable overhead. Any CPU manufactured in the last 6 years has the AES instruction set built in, which basically offloads all the encryption/decryption from the CPU. Even the initial SSL handshake isn't something that can be noticed with modern browsers. And also, regarding dpstatic.com, it shares the same encryption stream as digitalpoint.com (meaning it doesn't even need to negotiate a new connection when using it, as you can see in this screenshot).

    [screenshot: network panel showing dpstatic.com requests sharing the digitalpoint.com connection]
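
    On the AES-NI point, a quick way to measure the difference yourself, assuming a stock OpenSSL build (the -evp path uses AES-NI when the CPU supports it; the plain path is the pure-software implementation):
    openssl speed aes-128-cbc        # software AES
    openssl speed -evp aes-128-cbc   # hardware-assisted via AES-NI where available
    Code (markup):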

    If you want a raw test of whether a static document is retrieved faster over HTTP or HTTPS, try this:

    http://whichloadsfaster.com/?l=http://dpstatic.com/ad.js&r=https://dpstatic.com/ad.js

    It's a static document, so you have no overhead at the application level that could vary results (no PHP or database access). It's the same server, same document, etc.

    I just tested it with the "Repeat" option (100 times), and overall the HTTPS version was on average 6% faster.

    [screenshot: whichloadsfaster.com results over 100 runs, HTTPS ~6% faster on average]


    Not sure where you are getting the 515k number, but it's not right.

    I just cleared my cache and loaded the main forum page (which has more JS than normal because of the chart it shows). Pulling the web server logs (which are of course going to be exactly accurate), this is what it shows:

    [26/Apr/2014:08:47:44 -0700] "GET /j/xenforo/xenforo.js?_v=3505c6f9 HTTP/1.1" 200 54436 "https://forums.digitalpoint.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36"
    [26/Apr/2014:08:47:44 -0700] "GET /j/digitalpoint/highcharts/highcharts.js?_v=3505c6f9 HTTP/1.1" 200 55811 "https://forums.digitalpoint.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36"
    [26/Apr/2014:08:47:44 -0700] "GET /j/digitalpoint/highcharts/highcharts-more.js?_v=3505c6f9 HTTP/1.1" 200 8496 "https://forums.digitalpoint.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36"
    [26/Apr/2014:08:47:44 -0700] "GET /j/digitalpoint/twitter_feed.js?n=0 HTTP/1.1" 200 2189 "https://forums.digitalpoint.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36"
    [26/Apr/2014:08:47:44 -0700] "GET /social.js?u=https%3A%2F%2Fforums.digitalpoint.com%2F&r=digitalpoint HTTP/1.1" 200 2675 "https://forums.digitalpoint.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36"
    Code (markup):
    With a flushed cache, the entire page takes 0.5 seconds to render (without a flushed cache it's even faster, since it doesn't need to get the JS, images or CSS). But either way, if you do the math on the 5 JS files it retrieves for that page, it's 123.6KB, plus 32.8KB for jQuery, for a grand total of 156.4KB. You could of course argue that jQuery is needless, but the reality is that if you want to support anything other than the latest version of each browser, there would be more overhead *without* jQuery.

    And of course that 156.4KB will not load on every page view... only when you flush your browser cache or 1 year from the time you got it (whichever comes first).


    Has nothing to do with developer ineptitude... if that's the ONLY thing the developer is doing to make their site faster, then yes... I'd agree. But when you combine that with hundreds of other things to make a site faster, there really is no downside to using it when used properly.

    As far as browsers not "obeying it", you are wrong there... looking at the web logs, if you take out first-time visits by users (for any time frame, even if that first visit was 6 months ago), JavaScript, CSS or image files are downloaded on only 0.2% of page views on this site. This is based on a sample size of ~75M page views.

    This site is much more than a forum... there are all sorts of functions in the main JS that have nothing to do with the forum (intentionally, since that main JS is normally only loaded once, and we don't need to tell the browser to load additional JS for other things later). For example, the main JS includes stuff for the tools area, advertising platform, marketplace, etc. Based on the percentage of users that use those areas, it's more efficient to load it all up into a single JS that is only loaded once per year.

    It's not quite that simple... you have people using old versions of browsers (for example, you). Prior to the versions that came out about a year ago, every browser worked differently for pretty much everything... JavaScript, AJAX, CSS, etc. When the number of users on old browsers drops to a negligible level, we probably *will* get rid of jQuery, because at that point it serves very little purpose when you can write code/CSS that works properly on the majority of browsers without rewriting it.

    You don't need ANY JavaScript to rotate an image properly in a modern browser. CSS3 transforms/transitions require no JS and are hardware accelerated, so it's smoother as well (if doing animations with it).

    Literally all you need to do is apply this CSS to the object:
    transform: rotate(90deg);
    Code (markup):
    Say you wanted to make it spin 360 once over 2 seconds when something triggers it (a hover or a class change), it's this:
    transform: rotate(360deg);
    transition: all 2s;
    Code (markup):
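    Put together as a self-contained sketch with a hover trigger (class name and image made up; in 2014 you'd also ship the -webkit-/-moz-/-o-/-ms- prefixed copies):
    <style>
    	.spinner {
    		transition:transform 2s;	/* animate any transform change over 2 seconds */
    	}
    	.spinner:hover {
    		transform:rotate(360deg);	/* the state change that triggers the spin */
    	}
    </style>
    <img src="logo.png" alt="spins on hover" class="spinner">
    Code (markup):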
    My point is that I'm in absolute agreement with you that JavaScript is a terrible system (jQuery or otherwise) to use for animations when it can be avoided (99% of the time it can be).

    But again... I tend to agree with you that at some point in the not so distant future, jQuery will not be necessary unless you want to maintain backward compatibility with very old browsers (I'm good with not supporting browsers older than 3 years myself). In fact, some of the functions in our JS are laying the groundwork for getting rid of jQuery (for example, writing our own functions for things jQuery has built in and using those now, instead of it being a clusterfuck when jQuery goes away).

    It's absolutely the case that the overhead of using jQuery is less than the overhead of not using it on this site. For CSS it means you don't need to bloat your CSS 4x over with idiotic browser-specific prefixes (-o, -ms, -moz & -webkit). Just the CSS savings alone on this site make it worth it, or at least break-even. And then you add the fact that you can support browsers older than 1 year without rewriting every major JavaScript function to handle the differences/intricacies of each browser.
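
    For illustration, the prefix duplication in question looks like this for a single rule:
    -webkit-transform: rotate(90deg);	/* WebKit/Blink: Chrome, Safari */
       -moz-transform: rotate(90deg);	/* Gecko: Firefox */
        -ms-transform: rotate(90deg);	/* IE9 */
         -o-transform: rotate(90deg);	/* Presto: Opera 12.x */
            transform: rotate(90deg);	/* standard property last, so it wins */
    Code (markup):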

    I suspect it's because you are looking at jQuery only as you would use it or how you see poorly designed sites use it. Just because a shitty site uses HTML improperly, it doesn't make HTML inherently bad across the board. The same argument applies to jQuery (or ANY framework really). Like I said, I'm looking forward to when browsers are universal enough that jQuery isn't a good thing, but right now that's just not the case.

    I'm not saying pageloads are evil... but they aren't always necessary. I don't care if a page load took 1KB and 0.1 seconds... that's 1KB and 0.1 second that isn't needed.

    Like I mentioned above, that file has TONS of stuff that has absolutely nothing to do with the forum part of the site. We intentionally "overload" the primary JavaScript because most users use other parts of the site and the file only needs to be downloaded once (and it's 54.4KB of actual data transfer). The other option is breaking it apart into 10 or so different JavaScript files for different parts of the site, but then you are adding overall overhead on average (the underlying HTTP request to get each item).

    Your math is flawed however you look at it... if you look at the raw data throughput on the router over the last 5 minutes, it's 5.1Mbit average. In that same 5 minutes there were 13,183 page views, so ~44 per second. That works out to 118.7kbit per page view... which ultimately breaks down to each page view consuming 14.83KB on average overall. If each page view were consuming 778KB on average, you would need multiple T3s... specifically a 267.55Mbit connection to support the site just on a weekend, and closer to 600-700Mbit for a weekday. So your math is flat out not right...

    I know you can argue that you are counting uncompressed size (what your computer does with it after it retrieves the file), but if you are going to make that argument, it's going to be much, MUCH bigger than 778KB, because you should be factoring in the memory that your web browser uses to render a page as well. That 14.83KB average of *actual* transfer per page view could really be more than 100MB depending on your browser and how "un-memory efficient" it is to render a page.
     
    digitalpoint, Apr 26, 2014 IP
  8. digitalpoint
    #8
    There is a lot of stuff that uses microdata/semantic markup... Ever hear of Google? :)

    Breadcrumb data is used in Google search results... for example this thread:

    [screenshot: Google search result for this thread, showing the breadcrumb trail under the title]

    Microdata obviously isn't intended for humans... it serves no purpose to a user directly viewing a page.

    It's also handy for marketplace items in Google. For example:
    http://www.google.com/webmasters/to...nt.com/sphinx-search-for-vbulletin-4.870/item

    While I'd love to just ignore Google, if you really wanted to do that you should just block it via robots.txt and forego ~100k visitors per day from it.

    Microdata is also used by things like Facebook and Twitter for various things that ultimately drive a huge amount of traffic to the site (whether or not you think Facebook or Twitter are worthwhile websites is irrelevant... personally I don't really use Facebook, but I'll definitely take their traffic). :)
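
    For reference, the Facebook/Twitter side of this is typically done with their own meta-tag vocabularies (Open Graph and Twitter Cards respectively) rather than itemprop attributes -- an illustrative sketch, values made up:
    <meta property="og:type" content="article">
    <meta property="og:title" content="Opera 12.17 is here...">
    <meta name="twitter:card" content="summary">
    Code (markup):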
     
    digitalpoint, Apr 26, 2014 IP
  9. deathshadow
    #9
    Just because there are opcodes for it doesn't mean it doesn't leave that execution pipe hanging... Where the blue blazes did you get the idea that's how that works?!? Sure, averaging 3.5 clocks for every DWORD (or QWORD on x64) over the stock 28 (P4) to 48 (AMD, Atom, P3) clocks of 'brute force' code is a massive improvement, but given how much of that is balanced out by caching and memory controller prefetch, much less the memory bus bottleneck, it ends up something of a wash... at least until the memory bus stops being anywhere from one-half to one-fifth the CPU clock. (Though it does make WRITING encryption code WAY easier at the machine language level.) Of course, that bus dissonance is why the 12-core AMDs in super-ghz land (1 to < 2ghz) can be more attractive than a hyper-ghz (2ghz+) quad Xeon, depending on what you are doing for server activity.

    Interesting, as that's not what the waterfall in Firebug or Dragonfly is showing...

    Which on 3 runs after a cache flush is saying 75% faster for HTTP here, and subsequent attempts come in at 50% faster for HTTP... 100 runs comes in at a nice 7 times faster for HTTP, so ... nice way to shoot down your own argument. :D

    http://www.cutcodedown.com/for_others/digitalPoint/HTTPIsFaster.png

    Cute though how after a cache flush on only three runs, it's averaging 512ms... but on a 100 run (flush or no) It's coming in at 66ms or less. NOT that there's sufficient timer accuracy in the process to trust any number smaller than 66ms... but that's JS for you.

    Have to ask, what browser? I'm not getting that in Opera, FF or Chrome so... Of course, it could be you're sitting on top of your server (or a major pipe to it) and I'm not... that's why tools like that one are 'flawed' as without de-regionalization, it can lead you to false assumptions.

    Firefox, web developer toolbar, information -> document size.

    ... and it's probably not right, as there are several files linked in the markup not showing up on the list, so it's likely far more than the 515k that tool reports; though it's completely in line with the waterfall... though it also might be getting confused, since it seems to be downloading ads.lfstmedia.com/getad?site=232235 twice... and it's NOT caching either in FF, since it has a query; the trap of that stupid "put getdata at the end of your static includes" method.

    Or you run out of cache memory because you actually *SHOCK* visit websites other than this one, and the browser's cache manager starts flushing things regardless. Let's face it, the 30-40 megs of memory cache and 128-256 megs of disk cache limit (or even less on mobile, particularly since they don't cache to flash because they don't want to burn it out prematurely) doesn't go very far these days... hence why the oldest cached item on my phone is from 3 days ago, and on this laptop (which has never seen a flush -- as opposed to my workstation, which is flushed too often to use for testing this type of thing) it's 5 days ago.

    Excepting I've never seen anyone bother trying to make the site faster by actually fixing what's REALLY wrong, which can deliver 5x or more the benefits.

    I would REALLY be questioning those numbers; cache limits alone should be preventing that from happening unless the majority of those UA's are bots not retrieving anything but the markup. Cache-control is a cute idea, but unless you are allocating hundreds of megabytes of RAM and disk space as cache or people visit only one or two websites a day, it's a pipedream.

    Though caching servers provided by ISP's may be picking up the slack. That happens a lot... in which case you aren't seeing the whole picture.

    Normally I'd agree with that thinking, but to me doing so would hinge on how many people are going to those other parts of the site -- and I may have a skewed viewpoint on that, since I don't visit anything but the forums on a daily, or even monthly, basis.

    Uhm... BULLSHIT? Let's use XMLHttpRequest as an example:

    function makeAjax() {
    	// Native object on anything modern (and IE7+)
    	if (window.XMLHttpRequest) return new XMLHttpRequest();
    	// Legacy IE fallbacks via ActiveX, newest version first
    	function axObj(n) {
    		try { return new ActiveXObject(n); } catch (e) { return null; }
    	}
    	// NOTE: the first operand must sit on the same line as 'return',
    	// otherwise semicolon insertion turns this into 'return undefined'
    	return axObj('Msxml2.XMLHTTP.6.0') ||
    		axObj('Msxml2.XMLHTTP.3.0') ||
    		axObj('Microsoft.XMLHTTP');
    }
    Code (markup):
    From that point on it's functionally identical; NO differences worth even mentioning.
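
    A usage sketch (URL and element ID hypothetical):
    var xhr = makeAjax();
    if (xhr) {
    	xhr.onreadystatechange = function () {
    		if (xhr.readyState === 4 && xhr.status === 200) {
    			// identical handling whichever object makeAjax returned
    			document.getElementById('messageList').innerHTML += xhr.responseText;
    		}
    	};
    	xhr.open('GET', '/threads/latest', true);
    	xhr.send(null);
    }
    Code (markup):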

    In terms of CSS, if you use it properly you shouldn't need to do ANYTHING with scripting...

    Unless of course you count ignorance of how to use JavaScript or CSS properly -- which of course describes 99.999999999999999999999999999999999999999999999999999999% of the folks who seem to think jQuery serves a legitimate purpose other than pissing all over the Internet.

    Bwhaahhahaa... the one time I say rotator instead of slideshow -- and you didn't know what I meant. The term 'rotator' for images has come to mean NOT rotating AN image, but switching between a LIST of images or other content. You know, like banners and so forth?

    Google it:
    https://www.google.com/search?q=image+rotator+javascript

    Or as I just wrote in the documentation of the demo in my soon to be released library, "eFlipper - A full featured banner rotator / carousel / flipper / slideshow / slider / whatever the blue blazes they are calling them this week."

    Which are thankfully going away in Gecko and Blink... though really, if the handful of cases where you would need that actually impacts your overall CSS size, you are probably using more CSS than you need to. Admittedly, most people use two to three times as much CSS as they need, just as they use three to five times as much markup, for no legitimate benefit.

    NOT that I would use scripting to try and fix that, since that's just stupid bull that wastes bandwidth for nothing and is the antithesis of graceful degradation... but I have this "people don't get linear gradients, box shadows and rounded corners on legacy browsers, OH WELL" attitude towards it.

    Though I always find it a laugh when people list these massive libraries as "css savings" or "simpler scripting" when I'm most always using anywhere from one-half to one-tenth the code WITHOUT them... with NOT counting the size of those frameworks against it -- and that more than anything else is what makes me call bullshit on things like jQuery, LESS, SASS, OOCSS, blueprint, bootstrap, etc, etc... It always leaves me asking "Savings!?! WHAT SAVINGS?!?"

    ... and when fat bloated json or XML delivery is usually the same size (or sometimes even larger)... Though at least you didn't use the thing the die-hard AJAX-tards claim about it 'saving handshakes'; I've had a few people say that and it's like "Ok, we're done here moron."

    To me it's just one of those things where if MAYBE people didn't have markup-to-content ratios of 20:1, they wouldn't need to throw even MORE code at it. Unless you're unrolling loops in machine language, if someone tells you using more code is going to be faster or better, your BS alarm should be ringing.

    Except we're not just talking transfer; you're also talking execution... and compression/decompression.

    ... and with bloated sites creeping up to a megabyte to deliver single-digit k of plaintext and a single-digit number of images, with scripting whose filesizes are two to five times larger than the images of the theme; well... if you don't know what's wrong with that...

    Particularly when it's typically 500k or more scripting doing the job of 20k... ALL of which has to be parsed (and much of it run) before the page can even render? Hence that wonderful 2 second delay on sites like this scripting enabled before you dare click on anything?

    Which they used to do just fine without the microdata... Of course that in your example it's masking the URL is part of why it's so easy to abuse in the long term, which is why I'd not be surprised to see an update that slaps down sites for using it sooner than later.

    But my real problem with that crap is twofold...

    First, what the hell happened to "Code for the users, not the engine"? -- of course your SEO-tard scam artist nudniks defend this bloated idiotic halfwit bull, saying things like "Matt Cutts doesn't know what he's talking about"; which of course is WHY I consider the people promoting this crap to be scam artists, as who would be their biggest enemy? That's right, the guy in charge of the part of Google that slaps people down for abuse.

    Though admittedly, Google right now has a "left hand doesn't know what the right is doing" thing going on... Their marketing and advertising scamtards are in direct conflict with their anti-spam and search efficiency experts. Makes reading anything by Google feel like it was written by someone with multiple personality disorder.

    But still, this is the antithesis of how we've been told to write websites for a decade, no matter how much the black hat SEO-tards want it to be otherwise.

    Depends on who at Google you're listening to... it working on Google is in conflict with what their anti-spam people and webmaster guidelines have been telling us; sadly, the webmaster guidelines' recent changes seem more geared towards scamming people than helping make sites better -- see their entire "PageSpeed" bullshit, where crappy bloated idiotic websites get a 50% speed boost while sites I've written profile 20% slower with their nonsense.

    My second issue with it though is FAR more deeply rooted; it's the implementation. The breadcrumb one is a GREAT example of this, as apparently what they are telling us is that search engines, inside a breadcrumbs list (which semantically SHOULD BE A LIST), are too stupid to realize that an anchor might actually be the URI and the text inside the anchor might actually be the title... Think about that -- it's REDUNDANT TO THE BLOODY MARKUP!!!

    If that doesn't make your teeth hurt... Seriously, why the hell does it need all that garbage to say "these are breadcrumbs" -- If they had ONE attribute on a UL, then was smart enough to realize that the anchors are the links and the text inside the anchors is the titles for those links, I'd be fine with it... but that implementation? OH HELL NO! -- Dir mir est loch in kop...

    Microformats have always pissed me off as pointless code bloat and scam-artist shmendrik nonsense; sooner or later you have to let the content do its job, or within a decade we'll end up with idiocy like new tags in HTML 6 for VERB, NOUN, PRONOUN, etc... (which admittedly, with the halfwit idiotic HTML 5 article, section, nav, aside, header and footer crap, we're already halfway there)

    Pisses me off even more when they expect you to put the URL to their garbage rules on every blasted item; it's like they don't grasp what semantic markup, inheritance, or DOM structure is for, and just like the OOCSS and jQuery retards who feel the need to 'throw classes at EVERYTHING', instead of classes they're using those stupid extra redundant attributes. Why you can't just say it once on the wrapping parent is... well, really jacktarded code bloat.

    The laugh being the number of site owners who go all meshugga about content theft, then dope their sites to the gills with this microdata crap that makes site scrapers' jobs easier. Search engines have worked just fine for DECADES without any of this nonsense -- and adding this bekaptah kakamame drek just makes the web slower, makes sleazeball content thieves' lives easier, and basically throws semantic markup, and content as the important part, right out the window.

    In many ways it just boils down to "why not just let semantic markup and the entire reason HTML even exists do their job?"

    But again, if you don't see what's wrong with using 51k of markup and half a megabyte of scripting to deliver pages with single digit K of actual content on them -- we'll have to agree to disagree as we'll never see eye to eye on it.

    Though - even if you wanted to keep the microdata crap, it doesn't explain all the classes, span and div for nothing :p
    <ul class="breadcrumb">
    	<li itemscope="itemscope" itemtype="http://data-vocabulary.org/Breadcrumb">
    		<a href="//www.digitalpoint.com/" itemprop="url">
    			<span itemprop="title">Home</span>
    		</a>
    	</li>
    	<li itemscope="itemscope" itemtype="http://data-vocabulary.org/Breadcrumb">
    		<a href="/" itemprop="url">
    			<span itemprop="title">Forums</span>
    		</a>
    	</li>
    	<li itemscope="itemscope" itemtype="http://data-vocabulary.org/Breadcrumb">
    		<a href="/.#the-digital-point.1" itemprop="url">
    			<span itemprop="title">The Digital Point</span>
    		</a>
    	</li>
    	<li itemscope="itemscope" itemtype="http://data-vocabulary.org/Breadcrumb">
    		<a href="/forums/general-chat.19/" itemprop="url">
    			<span itemprop="title">General Chat</span>
    		</a>
    	</li>
    </ul>
    Code (markup):
    STILL basically half the code while retaining that pointless schema-bloat... and DAMN, those inner spans and that massive URL to the schema source for nothing piss me off.

    Oh, and rel="up" is supposed to be up one level, not up four -- NOT that I believe in using REL on anchors in the first place, as it too isn't useful to the user -- and we're SUPPOSED to be coding for the user, not the engine; at least, unless you WANT to go knee-deep into SEO scam artist shvantz-land.

    NOT that 'up' is a recognized REL to begin with -- but it's not like that's written in stone with the so called "specifications" being toothless and now thanks to HTML 5, not even bothering to be authoritative. Still though -- it's part of why things like REL on anchors and the various 'itemscope' type schema crap leaves me wondering "do the people behind this even understand what HTML is or how to use it?!?"
     
    Last edited: Apr 27, 2014
    deathshadow, Apr 27, 2014 IP
  10. digitalpoint
    #10
    Okay, okay... you win. :) These replies back and forth are getting way too long. lol
     
    digitalpoint, Apr 27, 2014 IP
    wiicker95 and Spoiltdiva like this.