"Pick and Top of the Pops" No 1's


  • #76
    To clarify, it appears my mistake was to presume RR looked at the previous week to identify the trend - not to give weight to the previous week's position.

    If RR had used a 'trend' method, that would have aligned it more with the others, and thereby produced fewer tied number ones for the BBC chart (and improved the RR chart). But because RR used a method that created drag, I think the BBC would have been justified in using a trend method themselves to counteract it.

    I also wonder if Chinnery in 1968 was so frustrated with the system that he was rather glad there was a 3-way tie as it helped him to argue for a change.

    • #77
      I've put in a question to Alan Smith to get him to clarify how Record Retailer specifically broke tied positions; I think this is going to be very interesting. It's looking like the RR chart positions were based on an average of record shop rankings (not total sales), plus a tiebreaker based on the week-over-week % change in sales. More on that in a future post.

      Meanwhile, I did a quick check of BBC ties on their POTP/TOTP charts. For 1964, there were 28 ties in the Top 20 over various weeks, one of them a triple tie.

      For 1968, the BBC had 35 ties in the Top 20, including the famous 3-way tie at #1 and a 4-way tie at a lower position.

      ------------------------------------------------

      Here's what Dave Taylor / Trevor Ager discovered about BBC chart ties; this info was in their BBC chart file that some of us have. While the BBC showed most ties as they occurred, they sometimes broke them as well. As I mentioned in a post above, they sometimes broke ties at the last chart position (#20/#30), and sometimes, but not always, at position #1. But they occasionally broke other ties too. Here are the at-times complicated rules that Derek Chinnery and Denys Jones put together:

      A note on tied positions… When Derek Chinnery first added the positions together for an averaged chart, he awarded records a tie if their positions added up to the same amount. He simply added the positions together from the published music charts; take, for example, this chart from 8 Aug 1959…

      . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . NME . RM . .MM . DISC . . . . . TOTAL . . . . BBC
      LIVIN` DOLL. . . . . . . . . . . . . . . . . . . . . . . . . . 1 . . . 1 . . . 1 . . . 1 . . . . . . . . 4 . . . . . . . . 1
      DREAM LOVER. . . . . . . . . . . . . . . . . . . . . . . . 2 . . . 2 . . . 2 . . . 2 . . . . . . . . 8 . . . . . . . . 2
      BATTLE OF NEW ORLEANS . . . . . . . . . . . . . 3 . . . 3 . . . 3 . . . 3 . . . . . . . .12. . . . . . . . 3
      A BIG HUNK O` LOVE. . . . . . . . . . . . . . . . . . 4 . . . 4 . . . 4 . . . 4 . . . . . . . .16 . . . . . . . 4
      A TEENAGER IN LOVE – MARTY WILDE. . . . 5 . . . 5 . . . 5 . . . 5 . . . . . . . .20 . . . . . . . 5
      LIPSTICK ON YOUR COLLAR. . . . . . . . . . . . . 5 . . . 6 . . . 6 . . . 6 . . . . . . . .23 . . . . . . . 6
      ROULETTE. . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 . . . 7 . . . 7 . . . 7 . . . . . . . .28 . . . . . . . 7
      PETER GUNNE/YEP. . . . . . . . . . . . . . . . . . 11/18. . 8 . . . 8 . . . 8 . . . . . . . .35 . . . . . . . 8
      PERSONALITY – ANTHONY NEWLY. . . . . . . 8 . . .10. . . 9 . . .12 . . . . . . . .39. . . . . . . 9
      RAGTIME COWBOY JOE. . . . . . . . . . . . . . . 12 . . .11. . .14 . . 11 . . . . . . . .48. . . . . . .10
      HEART OF A MAN . . . . . . . . . . . . . . . . . . . . 15= . . 9 . . 11 . . 15 . . . . . . . .50. . . . . . .11
      LONELY BOY . . . . . . . . . . . . . . . . . . . . . . . . . 9 . . 18 . . 10 . . 14 . . . . . . . . 51 . . . . . .12
      IT`S LATE. . . . . . . . . . . . . . . . . . . . . . . . . . . 10 . . 12 . . 19 . . 10 . . . . . . . . 51 . . . . . . 12
      SOMEONE . . . . . . . . . . . . . . . . . . . . . . . . . . 12 . . 13 . . 12 . . . - . . . . . . . . 58 . . . . . . 14
      GOODBYE JIMMY GOODBYE . . . . . . . . . . .14 . . 15 . . 20 . . . 9 . . . . . . . . 58 . . . . . . 14
      I KNOW . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15= . 14 . . 16 . . 18 . . . . . . . . 63 . . . . . . 16
      POOR JENNY. . . . . . . . . . . . . . . . . . . . . . . . . - . . .17 . . 13 . . 13 . . . . . . . . 64 . . . . . . 17
      THREE STARS . . . . . . . . . . . . . . . . . . . . . . . . - . . .20 . . 17 . . 16 . . . . . . . . 74 . . . . . . 18
      TWIXT TWELVE AND TWENTY. . . . . . . . . . . - . . .16 . . 18 . . . - . . . . . . . . 76 . . . . . . .19
      PERSONALITY – LLOYD PRICE . . . . . . . . . . - . . . 19 . . 15 . . . - . . . . . . . . 76. . . . . . . 19
      TEENAGER IN LOVE – CRAIG DOUGLAS . 15= . . - . . . - . . . . - . . . . . . . . 78
      A FOOL SUCH AS I . . . . . . . . . . . . . . . . . . . . - . . . - . . . - . . . .19 . . . . . . . .82
      I`VE WAITED SO LONG . . . . . . . . . . . . . . . . - . . . - . . . - . . . .17 . . . . . . . .80
      SIDESADDLE . . . . . . . . . . . . . . . . . . . . . . . . 19 . . - . . . - . . . . - . . . . . . . . . 82
      MAY YOU ALWAYS – JOAN REGAN. . . . . . 20 . . - . . . - . . . . - . . . . . . . . . 83
      I RAN ALL THE WAY HOME – IMPALAS. . . . - . . . - . . - . . . . 20. . . . . . . . 83

      So, Livin` Doll was the average number one on 4 points, Dream Lover number two on 8 points, and so on. If a record wasn`t in one of the charts (e.g. Lloyd Price wasn`t in NME or Disc that week), then the record was given 21 points for NME and 21 points for Disc, hence a total of 76 points, including 19 in Record Mirror and 15 in Melody Maker, making Lloyd tie with Twixt Twelve and Twenty, the Pat Boone record. If there was a tie in the total of an averaged chart at number 20, and one of the records involved in the tie was going down the chart, then the falling record was not included as a joint number 20 and was dropped from the average chart. If two or more records tied at 20 and were going down the chart, then the one that had been lower down the chart the previous week was not included in the tie and was also dropped, reducing the number 20 spot to just the one record. If more than one new entry was in a tie at number 20, then a joint number 20 was included in the chart.
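      (Not part of the original rules, but for anyone who wants to experiment: the basic totalling with the 21-point rule for absentees can be sketched in a few lines of Python. The function name and data layout are my own.)

```python
# Sketch of the averaging described above: add the four papers' placings,
# scoring an absence as 21 (one place below a Top 20).
def bbc_total(positions, absent_score=21):
    """positions: placings in NME, RM, MM, Disc; None = not charted."""
    return sum(absent_score if p is None else p for p in positions)

# Three rows from the 8 Aug 1959 example chart:
example = {
    "Livin' Doll": (1, 1, 1, 1),                        # total 4, BBC #1
    "Twixt Twelve and Twenty": (None, 16, 18, None),    # total 76
    "Personality (Lloyd Price)": (None, 19, 15, None),  # total 76, tied
}
for title, pos in sorted(example.items(), key=lambda kv: bbc_total(kv[1])):
    print(title, bbc_total(pos))
```

      Running this reproduces the totals in the table: 4, then the two records tied on 76.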

      If three records were involved in a tie anywhere in the chart and one of them was going down, then the falling record was moved a position further down and not included in the tie. From Spring 1965, if one record going up and another going down tied in the 11-30 positions, then the two were separated, with the rising record given the higher position. In the top ten, a tie was allowed if one record was going up and one going down, but only if the falling record was down just one place from the previous week. If the falling record was going down two or more places, then it was separated from the tie and put a place down, i.e. a falling record tied at number seven was placed at number 8. Though this was inconsistent.

      From September 1962, Melody Maker and Disc were both printing Top 30s in their papers, so an averaged Top 30 was worked out instead of just a Top 20. The same rules as previously discussed were used, but now records in a tie at number 30 were also treated like the ties at number 20. Although only a Top 20 was still used by BBC chart shows.

      At the BBC, Denys Jones took over the compilation from Derek Chinnery on 17th Dec 1966. Denys had slightly different rules, and at first used ties whatever the circumstances, hence the tie at number 20 in Dec 1966. From 22nd Aug 67 he only bothered with a tie if all the records involved were new entries. He changed this again on 2nd Apr 68 and included ties for records going up as well, hence the 4-way tie at number 11 later that month and, of course, the 3-way tie at number 1 in August 68. He might have done better to stick with only having ties for new entries. Taking into account the number of shops used in compiling Melody Maker`s chart, it looks very likely that Herb Alpert would have been the biggest seller on the 27th Aug 68. The order in which the tied records were played on the actual chart show was based on the previous week`s positions: Herb Alpert was played at the end of the chart because he was number 3 on the 20th Aug, the Bee Gees were played before that (they were number 7 on the 20th), and the Beach Boys before them (number 8 on the 20th). (But it was a miscalculation; in truth, the Beach Boys should have been at number 5 on the 20th, Dusty Springfield and Herman`s Hermits joint 6, and the Bee Gees number 8.) So really they could have been separated to give a better picture. Denys had already separated the Herd and Bobby Goldsboro on 7th May 68. They tied on points, but he took their individual positions into account: Bobby was number 6 in both NME and MM that week, while the Herd were at number 8 in both and slightly higher in RR. He also applied those tactics on 11th June 68, when Donovan, Dionne Warwick, and the Love Affair tied on points; these were separated to numbers 7, 8, and 9. Towards the end of the averaged charts, if one record stayed the same and another went up and they tied on points, then they were separated.

      ------------------------------------------------

      That is one crazy zoo! The BBC should have kept it simple and left ties as ties.

      Or, they could have used 'all charts equal' peak hierarchies, where a 9-10-14-18 record would beat a 9-11-15-15 record. You'd almost never have a tie that way.

      Or, they could have used hierarchies and broken ties by which chart sampled the most record shops. In the mid 60s a #1 on Melody Maker would beat a #1 on NME, then Disc, then Record Retailer. The more samples, the more representative and accurate the chart. And again, ties would be rare.

      Either of the above would have been much easier than calculating averages and applying crazy tiebreaker rules.

      As always, a huge thanks to Alan Smith, Dave Taylor, and Trevor Ager !!

      • #78
        Great information!

        Taking your suggestion of using hierarchies and applying it to the chart companies, couldn't they have just identified the largest shop in their sample that week and used it as the tiebreaker?

        • #79
          It's amazing that the BBC took a simple concept - average out the published charts to compile a "chart of charts" - and managed to turn it into something unnecessarily complex.

          Dave Taylor did once mention to me that the averaging out was done by simply adding together the positions titles were placed at on the various charts, and where a record appeared on some but not all charts, using position 21 on those charts where the record was absent. I think I asked Dave why the simple methodology of just using the inverse points system (i.e. 20 points for a record at number 1, 19 points for number 2, etc.) couldn't have been used instead, as it saved adding in a number 21 position on a chart where a record didn't appear. He didn't know why this wasn't used. Perhaps it wasn't an obvious way of doing it back then, though I'd always thought each music paper compiled its own chart by using the inverse points system on the best-selling lists of records that each record shop provided.

          • #80
            Giving 21 points to an 'absentee' boosted its position. You were effectively saying: "I don't know how many points above 20 this should get, so I shall give it the minimum."

            However if you don't give it anything you are boosting it even more. In the example BBC chart 'Someone' would only have 37 points and be at number 9 instead of number 14.

            Using the inverse points system would have been better. 'Someone' would have become untied at number 15.

            But what you really need is more information about the 21+ positions. I think the chart companies themselves erred in this; for example, to compile a Top 50 I think RR should have been using lists of more than 50 to get a more accurate picture of the lower reaches.
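            To put rough numbers on the 'Someone' example (a quick sketch using the totals from the 8 Aug 1959 chart above; the variable names are mine):

```python
# 'Someone' placed 12, 13, 12 and was absent from Disc.
positions = (12, 13, 12, None)

def total(positions, absent_score):
    return sum(absent_score if p is None else p for p in positions)

with_21 = total(positions, 21)  # BBC rule: 58 points, placed 14th
with_0 = total(positions, 0)    # no score at all for the absence: 37 points

# Totals of the records placed 1 to 13 in the example chart:
others = [4, 8, 12, 16, 20, 23, 28, 35, 39, 48, 50, 51, 51]
rank_with_0 = sum(t < with_0 for t in others) + 1
print(with_21, with_0, rank_with_0)  # 58 vs 37; 37 points would rank 9th
```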

            • #81
              I agree that assigning a value of 21 to fill in the blanks for missing chart positions is not the best mathematical approximation/solution. There are 26 total records on these 4 charts for the above week, 27 if you count the Peter Gunn/Yep split that NME did. But let's just ignore that for right now and use 26 for the following example.

              The total sum of all chart points should be considered, maintained, and not exceeded. To exceed a max total would be to change the relationships of the records against one another. Awarding 21 points to all the missing records can unfairly elevate them relative to records at positions 20, 19, 18, etc., when summing the total values. These relationships must be maintained, not artificially manipulated.

              In the 60s, when the BBC put together their weekly combo chart from weekly component charts of 30, 30, and 50 positions, they would cut off the 50-position chart at 30 positions and give all records at positions 31-50 on that chart a value of 31, even though you could read their actual positions, from 31 to 50! This artificially inflated some records at the expense of others. Not fair.

              Here's my solution. For a 26-position chart, the max points would thus be the sum of 1 to 26 = 351. Multiply by 4 for 4 charts, and that's the grand total for the combo chart = 1404.

              We don't know where the missing 6 records on each chart would appear on a 26-position chart, if they would at all, and some other record most likely would be in there as well. But let's just assume all 26 records would appear on each of the 4 charts. What is the most mathematically correct value to assign the missing positions? Since these 6 records could appear anywhere between #21 and #26, we should assign them the average value = 23.5. Doing this maintains the total points and the record-to-record relationships.

              Now, for the ties on the chart, there is a better mathematical way to work them as well. If 2 records tie at #5, it is incorrect to give them both 5 points. They should be given the average of 5 and 6 = 5.5. Same for 2 ties at #1, give them the average of 1 and 2 = 1.5. As they do in golf, they split the money for all tied positions, except for a playoff to decide the winner. No playoffs here, ha.

              So by doing the above 2 things, the max total points are not exceeded, and the relationships record to record are better maintained. No unfair manipulation.
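              A sketch of the golf-style tie handling in Python (the function is my own illustration, not anything the compilers used):

```python
from itertools import groupby

def fractional_ranks(totals):
    """totals: {title: points}, lower = better. Tied records share the
    average of the positions they span (two tied at #1 both get 1.5)."""
    ordered = sorted(totals.items(), key=lambda kv: kv[1])
    ranks, position = {}, 1
    for _, tied in groupby(ordered, key=lambda kv: kv[1]):
        tied = list(tied)
        shared = sum(range(position, position + len(tied))) / len(tied)
        for title, _ in tied:
            ranks[title] = shared
        position += len(tied)
    return ranks

print(fractional_ranks({"A": 4, "B": 4, "C": 12}))
# A and B split positions 1 and 2, so each gets 1.5; C is 3rd
```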

              Now, is it better to assign points to a chart from 1 to 20, or do an inverse from 20 to 1? Or, in the case of the 1960s when there were 50 chart positions, from 1 to 50, or 50 to 1? I actually did a study on this, and I'm too lazy to pull out the results, but I think it makes no difference as long as missing records are assigned the correct values for totaling up, as per my solution above. So in that case, the 1 to 20 method would work, and involves one less math step (and thus time). I think.

              But my desire is for a combo chart for each week to include every record that hit any of the component charts, whether that's a total of 51 positions or 63, and even if the total number of records changes every week. I want to see every record on that combo chart. For historical purposes, don't leave out any record or artist.

              Or the same thing if itís a hierarchy chart instead of a summing combo chart. Much easier to rank the records by hierarchies as thereís no math. Or break hierarchy ties by which chart sampled the most record shops.

              Thoughts? Comments?

              • #82
                What your study may have found, as I have after playing around with the figures some more, is that the BBC system provides exactly the same result as the inverse points system. The BBC points range of 3 to 62 corresponds to the inverse points range of 60 to 1 (bbc 3 = inv 60, bbc 4 = inv 59, bbc 5 = inv 58, etc.).

                So both are equally good/bad. Although I suppose it would be easier to make an adjustment to remove the artificial boost to absentees by increasing the score from 21 than using negatives.

                I still think that what you really need are longer input lists - e.g. all 3 root charts Top 50s for which you score each position but produce a Top 30.
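                The equivalence is easy to verify exhaustively; a small Python check, assuming three Top 20 charts with absences scored 21 (BBC) or 0 (inverse):

```python
from itertools import product

def bbc(positions):      # BBC rule: an absence scores 21
    return sum(21 if p is None else p for p in positions)

def inverse(positions):  # inverse rule: 20 points for #1 down to 1 for #20
    return sum(0 if p is None else 21 - p for p in positions)

# Every placing (or absence) contributes 21 to the two totals combined,
# so bbc + inverse is always 63 and the two rankings are identical.
for combo in product([None] + list(range(1, 21)), repeat=3):
    assert bbc(combo) + inverse(combo) == 63
print(bbc((1, 1, 1)), inverse((1, 1, 1)))  # 3 and 60, as noted above
```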

                • #83
                  Not that I'm in favor of an inverse point method (because of the extra math), but if I were, I wouldn't give negative points for missing records. I'd start all #1 records across the component charts with the same value. If there were 57 total records across 3 charts, all #1's would get a value of 57 points.

                  Let's say there was one Top 30 chart, one Top 40, and one Top 50. The Top 30 chart values would range from 57 down to 28. The missing 27 records would all get a value of the average (or midpoint) of 1 thru 27 = 14. For the Top 40 chart, the values would range from 57 down to 18. The missing 17 records would all get a value of 9. For the Top 50 chart, the values would range from 57 down to 8. The missing 7 records would all get a value of 4. If there are any ties, then you average those values.

                  Then add up the points for all records, and rank 'em. There's your combo chart.

                  Doing the BBC method, 1 thru 57, the missing records would get: 44 points on the Top 30 chart, 49 points on the Top 40, 54 points on the Top 50. Average the ties if there are any, then add 'em all up, and rank 'em.
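                  Both point schemes can be expressed as one-liners; a hypothetical Python sketch assuming a 57-record universe, as in the example:

```python
def inverse_values(chart_len, total_records=57):
    """#1 on any chart gets total_records points, the last listed place
    gets total_records - chart_len + 1, and the missing records share
    the midpoint of the leftover values 1..(total_records - chart_len)."""
    best = total_records
    worst = total_records - chart_len + 1
    missing = (1 + total_records - chart_len) / 2
    return best, worst, missing

def bbc_values(chart_len, total_records=57):
    """Same idea scored 1 = best: missing records share the midpoint of
    the positions below the chart's cutoff."""
    missing = (chart_len + 1 + total_records) / 2
    return 1, chart_len, missing

print(inverse_values(30), bbc_values(30))  # (57, 28, 14.0) (1, 30, 44.0)
```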

                  But the hierarchy method might be better, and less trouble.

                  • #84
                    I agree with all that.

                    Using hierarchy would be simpler, but would require an ongoing assessment by the BBC of the companies' sample sizes. Also, I wonder under what terms (if any) the BBC were allowed to use the charts; priority given to one chart company might not have been acceptable to the others. The BBC's opposition to publication even now suggests that there might have been conditions. I sometimes heard Alan Freeman on POTP say something like "Calculated from charts provided by ..." and specify the chart companies then used, which indicated to me that payments were made.

                    • #85
                      Good points. The BBC average chart did treat each chart equally, so as not to show 'commercial' favoritism to one over another. So if the BBC had done a weekly chart based on a hierarchy of positions for each record on the component charts, they would've had to treat all charts equally again. So a 1-2-4 record would beat a 1-3-3 record, which would beat a 1-5-5 record, regardless of which chart the positions occurred on.

                      But of course, as chart fans, we could now re-do the weekly hierarchies, showing preference to the charts which sampled the most record shops. Alan Smith's research uncovered the numbers; though not specific to every single week, the general numbers are far enough apart to give a reasonable ranking distinction. But it would be slightly more work.

                      But for historical purposes, a simple equal chart hierarchy is good enough for me. Weekly charts could be created to show chart positions across all 5 charts of the 50s & 60s, and I'd throw in the BBC too just for the heck of it. That's the way it was, as Alan Smith says. There was no 'official' chart, only better charts and worse charts. Show them all!

                      My other hierarchy idea for a weekly chart (which I've posted elsewhere), let me call this the "group hierarchy" idea, would be something like this: create a combo weekly chart where the top positions are the entire component chart which sampled the most record shops, followed by all the additional records on the next highest-sampled chart, followed by all the additional records on the chart after that, etc., etc.

                      So for a weekly chart in 1966, the top positions would be the entire Melody Maker chart; followed by other records in the NME chart that didn't make the MM chart; followed by other records in the Disc chart that didn't make MM or NME; followed by other records in the Record Retailer chart that didn't make MM, NME, or Disc. And add the bubblers and breakers in there somewhere. Most reasonable, and the easiest combo chart of all!
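                      The group hierarchy is just an ordered de-duplication; a Python sketch with invented titles, since I don't have a real week's lists in front of me:

```python
def group_hierarchy(charts):
    """charts: lists of titles, best-sampled chart first. The whole first
    chart leads; each later chart adds only records not already placed."""
    seen, combo = set(), []
    for chart in charts:
        for title in chart:
            if title not in seen:
                seen.add(title)
                combo.append(title)
    return combo

# Hypothetical 1966-style week: all of MM, then NME, Disc, and RR extras.
mm, nme, disc, rr = ["A", "B", "C"], ["B", "D"], ["C", "E"], ["F"]
print(group_hierarchy([mm, nme, disc, rr]))  # ['A', 'B', 'C', 'D', 'E', 'F']
```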

                      To your other point, the need to have three Top 50 root charts in order to produce a combo Top 30: that is a generally valid idea, and I think statistically recommended, as far as 'official' math science goes, as I was taught something like that in various classes over the years. It's similar to working with decimal numbers: if some numbers are carried out to 5 decimal places and others to only 3, I think the rule goes that after your final calculation involving these numbers, your result is only truly meaningful to 2 decimals. Something like that, ha.

                      In that light, to the BBC's credit, they didn't calculate a Top 50 chart every week for the 60s, though technically they could have. They may have been limited on available airtime for POTP and TOTP, so no need to go beyond calculating a Top 30.

                      But again, as a music fan, I want to see a chart with every available record on it, every week, for purposes of knowing both (1) what records an artist released, and (2) what each record's chart position was. A simple equal-chart hierarchy, with multiple columns showing each record's position on each chart, would serve those purposes: it gives actual historical data, and with no math it should be relatively easy to put together. I think…

                      • #86
                        So if I understand the 'hierarchy of positions' method correctly, the infamous 3-way tie would be untied like this ...

                        1 : Beach Boys 1-2-4
                        2 : Herb Alpert 1-3-3
                        3 : Bee Gees 2-2-3

                        You may have been thinking of this when you gave your examples, because Tom Jones was 1-5-5. Or perhaps that was just a coincidence!
                        Last edited by Splodj; Sun September 1st, 2019, 23:32.

                        • #87
                          I see you are a learned scholar of the charts! Yep, this is the infamous 3-way, for the week ending 31 Aug 1968. But the hierarchy works like this:

                          1 : Beach Boys 1-2-4
                          2 : Herb Alpert 1-3-3
                          3 : Tom Jones 1-5-5
                          4 : Bee Gees 2-2-3
                          5 : Tommy James Shondells 4-5-6
                          6 : Crazy World Arthur Brown 4-6-7
                          7 : Aretha Franklin 6-7-8
                          8 : Amen Corner 7-8-8
                          9 : Herman's Hermits 9-9-9
                          10 : Sly Family Stone 10-10-13
                          11 : Dusty Springfield 10-12-13
                          etc.

                          Tom gets #3 because his best peak, #1, beats the Bee Gees' best peak of #2, regardless of the 2nd best or 3rd best peaks. If the best 2 peaks of one record tie with another's, then you go to the 3rd best peak, etc.

                          So I'm not doing a total points and breaking the tie with a hierarchy, I'm doing a pure hierarchy. But that's an interesting thought, too...

                          Easy peasy, quick, and no math. But more importantly, meaningful results, and ties are rare. And to historically show everything, I'd rearrange (or add) the hierarchy peaks in separate NME, MM, and RR columns.
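                          For what it's worth, the pure hierarchy is a one-line sort in Python: sort each record's positions ascending and compare the tuples lexicographically, which is exactly best peak first, then 2nd best, then 3rd. (A sketch, using the sorted peaks listed above.)

```python
# Sorted peaks for the week ending 31 Aug 1968, as listed above.
records = {
    "Beach Boys": (1, 2, 4),
    "Herb Alpert": (1, 3, 3),
    "Tom Jones": (1, 5, 5),
    "Bee Gees": (2, 2, 3),
}
# Tuples compare element by element, so (1, 5, 5) beats (2, 2, 3):
# a best peak of #1 wins regardless of the 2nd and 3rd best peaks.
ranking = sorted(records, key=lambda r: tuple(sorted(records[r])))
print(ranking)  # ['Beach Boys', 'Herb Alpert', 'Tom Jones', 'Bee Gees']
```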

                          • #88
                            You're just doing that to push the Bee Gees, my nominee for the top spot, down as far as possible!

                            But seriously, it does seem odd that the record that had been zooming up to the top of the charts should fall back so much and behind one that NME had placed at no. 1 in a maverick way.

                            There is sometimes a positioning in one chart that is way out of kilter with the others, and I think these are what should be ironed out in any averaging. And MM was not immune from throwing up some oddballs.

                            • #89
                              Nah, I'm a big 60s Bee Gees fan, have all their albums up to the disco period, and have several comps for that and the post disco era, ha. But have no fear, The Bee Gees did get the BBC #1 on the very next week.

                              I don't know if you're familiar with Dave Taylor's postings over several forums, but he said the following about this particular week:
                              A 3-way tie at number one, on the 27th Aug 1968. Although that particular week, the NME had a mix up, with Tom Jones & Tommy James, that should have had the Bee Gees on top, but didn't. It did, of course go to the top, in 2, of them, the following week. As the BBC were only averaging 3 charts, this is why the tie occurred. Had they also used Top Pops Magazine, it would of correctly put the Bee Gees, on top on 27th. Denys Jones miscalculated nos. 5 to 8 on 20th Aug. Beach Boys should've been 5, with a tie at 6 with Dusty & Herman, & Bee Gees at 8.

                              Mistakes occur, not only on the component charts, but the BBC calcs as well. They all should have paid better attention, this is important stuff !!

                              I've come up with 6 ways to do a combo chart for the 50s & 60s, plus an optional tiebreaker for one of them. 3 methods use equal chart weighting, 3 use weighting by # of record shops sampled. They all have +'s and -'s, but yes we should strive for the most fair results. Or calculate them all! But yes, a totaling/average method does indeed amplify what's in common among the various charts, and removes the uncommon, so there is validity in doing that.

                              • #90
                                Interesting. What exactly was the NME mix-up - should James have been 1 and Jones 5? In any retrospective compilation of a 'real' combo chart mistakes like that should be corrected.

                                Yes, as mentioned, in all but one case, if the BBC had adopted a simple 'trend' method it would have broken the number 1 ties by installing the following week's number one into that position on its own a week earlier. This would have removed the BBC chart's reputation (among my schoolfriends anyway!) of being old hat. Not only did it lag slightly behind what we had seen in the previous week's NME and MM (due to the drag effect of the unseen RR), but it lagged significantly behind the charts we had seen in the Sunday papers before POTP was broadcast.

                                Of course different considerations come into play when compiling a retrospective combo, but the number of MM returns was not so much greater than the others' that I would be confident in giving Herb Alpert the top spot for a week. I am sure you know this, but if MM had been adopted as 'official', the same criticism would have been levelled about a Beatles and a Stones record being overlooked that all the other charts had at number 1 - Lady Madonna and Little Red Rooster.

                                • #91
                                  Here's Dave Taylor's quote (in an email to me 12 Nov 2012) about the NME mix-up:
                                  Similar story with NME in August 68. Tom Jones was given the points of Tommy James & on the 31st Aug, it prevented the Bee Gees, from going to the top. Had this mistake, not occurred, the 3 way BBC tie, would not of happened. The Bee Gees would of been sole #1, with Herb Alpert & the Beach Boys being joint #2!

                                  Here's yet another Dave post about this, on the Popscene forum 6 Sept 2011:
                                  The EMI average chart gives the #1 to Herb Alpert. This is the big one & settles the BBC joint number one, where BBC had Herb in a tie, with the Bee Gees "I've Gotta Get A Message To You" & the Beach Boys "Do It Again". The Beach Boys were a Record Retailer #1, so 85 shops to over 600 couldn't be right. Another chart mag of the time, "Top Pops" Magazine, had the Bee Gees at the top this week. "Top Pops" itself was a teen mag that made a chart from 12 branches of WH Smith. It lasted from 1968 to March 1971.

                                  NOTE: Since I put this list together, evidence seems to suggest that Record Retailer actually had an unlisted joint number one on 31st Aug 68, with both the Beach Boys & the Bee Gees. Which changes things a bit, because it means the BBC average should have shown the Bee Gees as sole number one, with Herb & the Beach Boys at joint #2.

                                  Regarding the lag/drag on the BBC chart, they should have left Record Retailer out of the average altogether! That was the main source of the drag...

                                  Here's another tidbit I only found out last month, from Alan Smith. The NME chart that was published in the Sunday papers was a 'special' NME chart, that only sampled from the last 2 days of the sample week, covering the weekend higher volume sales, thus not the full week. And they sampled only about 20 of their largest record shops out of their much larger total. So the NME Sunday paper chart was indeed much 'faster'.

                                  According to Alan, Melody Maker was sampling around 265 record shops in Aug 1968, NME around 175, Record Retailer 80. Using those values in a weighted average, and the 3 charts as published, Herb Alpert actually does come out on top. Followed by the Bee Gees, then Beach Boys, then Tom Jones. But consider this in light of Dave Taylor's 3 quotes.
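                                  A sketch of that weighted average in Python; the shop counts are Alan's figures above, but the example placings are invented for illustration, since the actual per-chart positions aren't reproduced here:

```python
def weighted_position(positions, weights):
    """Average chart position weighted by each chart's shop count;
    a lower value means a higher combo placing."""
    return sum(p * w for p, w in zip(positions, weights)) / sum(weights)

# Aug 1968 sample sizes as reported above: MM 265, NME 175, RR 80.
weights = (265, 175, 80)

# Invented placings on (MM, NME, RR) for two hypothetical records:
print(weighted_position((1, 3, 2), weights))  # an MM #1 counts most
print(weighted_position((2, 1, 1), weights))
```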

                                  Yes, if Melody Maker had been given 'official' status for the 60s, Lady Madonna and Little Red Rooster would lose out, but Please Please Me, Penny Lane, Strawberry Fields Forever, and 19th Nervous Breakdown (just to name a few) would have won. I'll take that trade, ha. But seriously, we should go for what we think is the most correct, most fair chart option; not use possible outcomes to determine that decision. But you know that, ha...

                                  • #92
                                    I think he is saying that NME gave SOME of the points to Jones that should have been allocated to James, and had this not happened the NME would have placed the Bee Gees at number one. But then you cannot assume everything else is unaffected, because it depends on where Jones falls to and where James rises to. Alpert could have been sole number 2 in the BBC chart as a result.

                                    MM also had a Sunday chart. I think the NotW had MM, and the Mirror and People had NME.

                                    Do you know what caused the RR drag? If it was because their smaller independent stores were out in the sticks and the local yokels there were less hip, then I don't think they should be excluded in a combo retrospective, but I agree there was a good argument for the BBC to exclude them.

                                    Incidentally I notice that the normally reliable Wikipedia page 'List of UK charts and number-one singles (1952-1969)' has the Bee Gees top in all 3 charts, which is incorrect as it didn't make no. 1 in MM.
                                    Last edited by Splodj; Tue September 3rd, 2019, 11:27. Reason: Further reflection on the Sunday papers

                                    Comment


                                    • #93
                                      Edited post as on further reflection I think the Sundays were the other way round (MM in NotW).
                                      Last edited by Splodj; Tue September 3rd, 2019, 11:30.

                                      Comment


                                      • #94
                                        I must apologize again for my comments concerning the Record Retailer chart. I retract my statement that the BBC should have left them out of their average calculation. They were a legit chart, they did what they did, and provided a Top 50 chart for half the 60s when no other chart did. I occasionally forget my problem is not with RR, it’s with the Official Charts Co for declaring RR the ‘official’ chart for the 60s, even though RR sampled the fewest record shops and disagreed with the other charts the most often.

                                        So why did RR drag behind the other charts? RR may have calculated their own chart relatively accurately, apart from the occasional mistakes common to all charts, but their forced tiebreaking was detrimental, especially if they were looking at previous-week data in doing so. If their record shop sampling was not diverse enough geographically across the entire UK, that would have been a problem, as would their relatively low number of sampled shops.

                                        If the RR sampling week was of a different time period than the other charts, that would have been a problem, too. But Alan Smith is adamant that all the 60s charts sampled from Monday to Saturday, as per the chart compilers he personally interviewed. Period, end of discussion. Though I did find some posters on other forums who stated some different sampling periods. One Brian Hankin on the Haven forums said RR sampled from Saturday to Friday, then in July 1967 changed to Monday to Saturday. Alan refutes this: Monday to Saturday all the way. I think Dave Taylor may have mentioned some different sampling periods too, but without digging it up I don’t recall which charts and which days.

                                        Nonetheless, RR did compile their charts on a different day than the other charts. RR compiled on a Tuesday, the others on a Monday. RR also changed their publishing date in July 1967, which threw a major wrench into the charts that month. Instead of the usual 80 shop returns gathered for their calculations, only about 20 shops could adjust that quickly and get their returns in on time. They eventually recovered in time for August.

                                        As the 60s wore on, RR did get closer to the other charts, no doubt due to the increase in shops sampled. Again, all the charts were relatively close to each other, the big hits were the big hits on all the charts. It’s just that RR disagreed the most with the other charts when you look at specific chart positions.

                                        In terms of #1 records of the 60s:

                                        --NME had 7 records at #1 that no other chart placed at #1, and failed to place 2 records at #1 that were #1 on all the other charts = total of 9

                                        --Melody Maker had 7 records at #1 that no other chart placed at #1, and failed to place 6 records at #1 that were #1 on all the other charts = total of 13

                                        --Record Retailer had 11 records at #1 that no other chart placed at #1, and failed to place 6 records at #1 that were #1 on all the other charts = total of 17

                                        Also, RR had the fewest debuts at #1 compared to the other charts.

                                        What could explain all this? Either RR was the most accurate chart, or the least accurate. I think record shop sample size answers the question.

                                        When you look at all the Top 10s, it’s even more apparent. RR not only disagreed the most often with the other charts in terms of record peak position, but also in the distance from the average position. Across every time period of the 60s, whether 5 charts 1960-62, 4 charts 1962-67, or 3 charts 1967-69. The numbers are available in my posts in other threads here on UKMix. RR got closer to the other charts as the decade wore on, but they were always the least in agreement.

                                        I find it interesting that going into the 60s, the other 4 charts had been around for years, able to get things up and running, and get the kinks worked out, into becoming a smooth operation. But the OCC wants to give ‘official’ status to RR as soon as they published their first chart! Totally unreasonable.

                                        So RR, a good chart, not great, they were a piece of the puzzle, a Top 50 chart for half the 60s when the other charts were smaller. OCC, big mistake in choosing RR to represent the 60s. Not warranted at all when you look at the facts and data.

                                        Durn I’ve done it again, written a too long post, ugh…

                                        Comment


                                        • #95
                                          Originally posted by RokinRobinOfLocksley View Post
                                          They were a legit chart, they did what they did, and provided a Top 50 chart for half the 60s when no other chart did.
                                          I've read this topic with great interest, and I don't want to diminish your post to a single line. But to me, providing a Top 50 as the chart with the smallest sample size, where the others provided a Top 30, seems more like a negative. For two reasons. The first: the lower your sample size, the less accurate you will be the further down the chart you go. The second: the other charts chose to publish a smaller chart even though their sample size was larger, suggesting to me that they weren't confident in the lower ranks, or at the very least deemed it unnecessary given the state of the record business at the time.

                                          So even though RR consistently being the least 'agreeing' chart in the top spots is bad enough for the 'official' UK 1960s chart, it's almost worse to me that the OCC also publishes the most uncertain, least accurately sampled ranks 31-50 as official. Especially because RR having a Top 50 throughout the 1960s seems to have been a (big?) factor in choosing it as the de facto chart to represent them.

                                          Comment


                                          • #96
                                            I think that in March 1967, when MM decided the bottom part of its chart was being compromised, instead of cutting back from 50 to 30 it should have taken measures to combat or minimise that abuse instead.

                                            Comment


                                            • #97
                                              As far as I am aware they did that as well. There was an article to say they would monitor and take action.
                                              http://thechartbook.co.uk - for the latest and best chart book - By Decade!
                                              Now including NME, Record Mirror and Melody Maker from the UK and some Billboard charts

                                              Comment


                                              • #98
                                                I think MM said they would continue to collate a Top 50 internally to enable them to monitor the unpublished 31-50 positions for suspicious behaviour.

                                                On another matter, I have been looking at the thread on Beatles EPs. Without doing a detailed analysis, the POTP chart positions for them appear roughly similar to those in MM, NME and Disc. As RR did not include EPs in their main chart (having a separate EP chart) I wondered if the BBC had a method to ensure that this did not drag down the average score for EPs in their chart.

                                                Comment


                                                • #99
                                                  Splodj, yes on the 1st paragraph, and I think yes on the 2nd. I don't recall Alan Smith or Dave Taylor specifically mentioning how the BBC adjusted their chart for EPs not being on the RR chart (as they had their own separate EP chart), but they must have done so.

                                                  Looking at The Beatles 'Twist and Shout' EP, it peaked on NME, MM, and Disc on 17 Aug 1963 at 4-2-3 for an average of #3. Coincidentally the BBC gave it a #3 for that week, so they must have assigned a value of #3 for RR as well (or simply excluded RR from the average, which gives the same result). Otherwise, the BBC would have assigned 'Twist' a #21 for RR, which would have averaged out to a #7.5.
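                                                  Here's the same arithmetic spelled out, using the positions quoted above. The 21 is the assumed "off-chart" penalty value the BBC would otherwise have used for a record absent from RR's singles chart.

```python
# 'Twist and Shout' EP, week of 17 Aug 1963, positions as quoted above.
positions = {"NME": 4, "MM": 2, "Disc": 3}

# Average over the three charts that listed EPs.
avg_without_rr = sum(positions.values()) / len(positions)

# Average if RR were counted with an assumed off-chart value of 21.
avg_with_rr_penalty = (sum(positions.values()) + 21) / 4

print(avg_without_rr, avg_with_rr_penalty)
```

Since the BBC published #3, not #7.5, it does look like they either excluded RR or substituted the three-chart average for it, as suggested above.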

                                                  Comment


                                                  • Originally posted by Splodj View Post
                                                    I think that in March 1967, when MM decided the bottom part of it's chart was being compromised, instead of cutting back from 50 to 30 it should have taken measures to combat or minimise that abuse instead.
                                                    I think they cut it back because the top 30 was seen as more accurate, and had less chance of containing records that were being bought up into the lower regions of the chart by record companies hoping that the public would then buy them. The "official chart" suffered from this problem for ages when it was a top 50; that is why TOTP only used the top 30 as a chart.

                                                    You can almost certainly bet that any record that spent 4 weeks in the chart without getting past 41 was bought by the record companies. They would get the record into the chart, then increase the effort in the second week to ensure it was still selling, which might have resulted in a climb from 49 to say 41. Then they let it go or put less effort into it. If it had taken off then it moved past 41; if it hadn't then it fell to say 45. They might try to maintain the sales, but generally the record would fall out in week 3, or last another week.
                                                    This wasn't just done to new acts. One sales rep told me that they had pushed the Hollies - Long Cool Woman In A Black Dress in this way. Plus other well known acts. The vast majority of acts didn't know this was happening and would have been angry if they had known.
                                                    Education for anyone aged 12 to 16 has made a mess of the world!

                                                    Comment
