Hi everyone,
I recently set up a MediaWiki (http://server.bluewatersys.com/w90n740/)
and I need to extract the content from it and convert it into LaTeX
syntax for printed documentation. I have googled for a suitable OSS
solution, but nothing obvious turned up.
I would prefer a script written in Python, but any recommendations
would be very welcome.
Do you know of anything suitable?
Kind Regards,
Hugo Vincent,
Bluewater Systems.
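In case it helps as a starting point, here is a minimal, hedged Python sketch of
one possible approach: pull raw wikitext through MediaWiki's index.php?action=raw
and apply a few naive wikitext-to-LaTeX substitutions. The index.php path and the
page title in the comment are assumptions, and a real converter would want a
proper wikitext parser rather than regexes.

    import re
    import urllib.parse
    import urllib.request

    def fetch_wikitext(base_url, title):
        """Fetch raw wikitext via MediaWiki's index.php?action=raw."""
        url = "%s/index.php?title=%s&action=raw" % (base_url, urllib.parse.quote(title))
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("utf-8")

    def wikitext_to_latex(text):
        """Very rough conversion of a handful of common wikitext constructs."""
        text = re.sub(r"^===\s*(.+?)\s*===\s*$", r"\\subsection{\1}", text, flags=re.M)
        text = re.sub(r"^==\s*(.+?)\s*==\s*$", r"\\section{\1}", text, flags=re.M)
        text = re.sub(r"'''(.+?)'''", r"\\textbf{\1}", text)        # bold
        text = re.sub(r"''(.+?)''", r"\\emph{\1}", text)            # italics
        text = re.sub(r"\[\[[^|\]]+\|([^\]]+)\]\]", r"\1", text)    # piped links -> label
        text = re.sub(r"\[\[([^\]]+)\]\]", r"\1", text)             # plain links -> title
        return text

    # Placeholder base URL and page title -- adjust for the wiki in question:
    # print(wikitext_to_latex(fetch_wikitext("http://server.bluewatersys.com/w90n740", "Main Page")))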
I've been putting placeholder images on a lot of articles on en:wp.
e.g. [[Image:Replace this image male.svg]], which goes to
[[Wikipedia:Fromowner]], which asks people to upload an image if they
own one.
I know it's inspired people to add free content images to articles in
several cases. What I'm interested in is numbers. So what I'd need is
a list of edits where one of the SVGs that redirects to
[[Wikipedia:Fromowner]] is replaced with an image. (Checking which of
those are actually free images can come next.)
Is there a tolerably easy way to get this info from a dump? Any
Wikipedia statistics fans who think this'd be easy?
(If the placeholders do work, then it'd also be useful for convincing some
wikiprojects to encourage them. Not that there's ownership of
articles on en:wp, of *course* ...)
- d.
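One rough way to get these numbers from a pages-meta-history dump is to stream
the XML, track which images each successive revision of a page references, and
count revisions where a placeholder filename disappears while a non-placeholder
image appears. The sketch below is untested against a real dump; the placeholder
list is a guess and incomplete, and it ignores spaces-vs-underscores, redirects,
and whether the new image is actually free.

    import bz2
    import re
    import xml.etree.ElementTree as ET

    # Assumed, incomplete list of placeholder filenames.
    PLACEHOLDERS = {"Replace this image male.svg", "Replace this image female.svg"}
    IMAGE_RE = re.compile(r"\[\[Image:([^|\]]+)", re.IGNORECASE)

    def count_placeholder_replacements(dump_path):
        """Count revisions where a placeholder image vanishes and another image appears."""
        replacements = 0
        prev_images = set()
        with bz2.open(dump_path, "rb") as fh:
            for _event, elem in ET.iterparse(fh):
                tag = elem.tag.rsplit("}", 1)[-1]       # strip the export namespace
                if tag == "text":
                    images = {name.strip() for name in IMAGE_RE.findall(elem.text or "")}
                    lost_placeholder = bool((prev_images - images) & PLACEHOLDERS)
                    gained_other = bool((images - prev_images) - PLACEHOLDERS)
                    if lost_placeholder and gained_other:
                        replacements += 1
                    prev_images = images
                elif tag == "revision":
                    elem.clear()                        # free each revision's text
                elif tag == "page":
                    prev_images = set()                 # the next page starts fresh
                    elem.clear()
        return replacements

    # print(count_placeholder_replacements("enwiki-20080103-pages-meta-history.xml.bz2"))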
The most recent enwiki dump seems corrupt (CRC failure when bunzipping).
Another person (Nessus) has also noticed this, so it's not just me:
http://meta.wikimedia.org/wiki/Talk:Data_dumps#Broken_image_.28enwiki-20080…
Steps to reproduce:
lsb32@cmt:~/enwiki> md5sum enwiki-20080103-pages-meta-current.xml.bz2
9aa19d3a871071f4895431f19d674650 enwiki-20080103-pages-meta-current.xml.bz2
lsb32@cmt:~/enwiki> bzip2 -tvv
enwiki-20080103-pages-meta-current.xml.bz2 &> bunzip.log
lsb32@cmt:~/enwiki> tail bunzip.log
[3490: huff+mtf rt+rld]
[3491: huff+mtf rt+rld]
[3492: huff+mtf rt+rld]
[3493: huff+mtf rt+rld]
[3494: huff+mtf rt+rld]
[3495: huff+mtf data integrity (CRC) error in data
You can use the `bzip2recover' program to attempt to recover
data from undamaged sections of corrupted files.
lsb32@cmt:~/enwiki> bzip2 -V
bzip2, a block-sorting file compressor. Version 1.0.3, 15-Feb-2005.
Copyright (C) 1996-2005 by Julian Seward.
This program is free software; you can redistribute it and/or modify
it under the terms set out in the LICENSE file, which is included
in the bzip2-1.0 source distribution.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
LICENSE file for more details.
bzip2: I won't write compressed data to a terminal.
bzip2: For help, type: `bzip2 --help'.
lsb32@cmt:~/enwiki>
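For what it's worth, here is a small Python sketch (essentially a slower
`bzip2 -t`) that stream-decompresses the archive and reports roughly how much
decompressed data came through before the failure. It assumes Python 3, whose
bz2 module also handles multi-stream archives.

    import bz2

    def verify_bz2(path, chunk_size=1 << 20):
        """Stream-decompress a .bz2 file; report whether its CRCs check out."""
        decompressed = 0
        try:
            with bz2.open(path, "rb") as fh:
                while True:
                    chunk = fh.read(chunk_size)
                    if not chunk:
                        break
                    decompressed += len(chunk)
        except (OSError, EOFError) as exc:
            print("error after %d decompressed bytes: %s" % (decompressed, exc))
            return False
        print("OK: %d decompressed bytes" % decompressed)
        return True

    # verify_bz2("enwiki-20080103-pages-meta-current.xml.bz2")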
>
> Message: 8
> Date: Fri, 12 Oct 2007 17:59:22 +0200
> From: GerardM <gerard.meijssen(a)gmail.com>
> Subject: Re: [Wikitech-l] Primary account for single user login
>
> Hoi,
> This issue has been decided. Seniority is not fair either; there are
> hundreds if not thousands of users that have done no or only a few edits and
> I would not consider it fair when a person with say over 10.000 edits should
> have to defer to these typically inactive users.
1. Yes, it's not fair, but this is a truth about the Wikimedia projects that one
has to admit. Imagine if all Wikimedia sites had had single user login ever
since they were first established: the person who registered first would own
that username on all Wikimedia sites.
2. A person with fewer edits is not necessarily less active than one with more
edits. And according to http://en.wikipedia.org/wiki/Wikipedia:Edit_count,
``Edit counts do not necessarily reflect the value of a user's contributions
to the Wikipedia project.''
Some users may have a lower edit count because:
* they deliberately edit, preview, edit, and preview an article, over and
over, before submitting a considered version to the Wikimedia sites;
* they edit an article over and over in offline storage, and submit only the
final version to the Wikimedia sites.
Other users may have a higher edit count because:
* they submit many changes without previewing them first, and then have to
correct those hasty edits, over and over;
* they submit many minor changes separately, over and over, rather than
accumulating them into fewer edits;
* they do many bot-like routines by hand, rather than letting a real bot do
those tasks;
* they take part in many edit wars;
* they take part in many arguments on many talk pages.
What if the users with a lower edit count try to increase it in order to take
back the status of primary account? They might change their editing habits to
raise their edit count,
* by submitting many edits without a careful preview,
* by splitting accumulated changes into many minor edits and submitting them
separately,
* by stopping their bots and doing those routines by hand,
* by joining edit wars.
3. Given point 2 above, I think a better measure of activeness is the time
between the first and the last edit of that username.
The formula would look like this:
activeness = last edit time - first edit time
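As a toy illustration of that formula (the timestamps below are invented; a
real measurement would pull a user's first and last contribution times from
the database or the API):

    from datetime import datetime

    def activeness(edit_timestamps):
        """Proposed measure: last edit time minus first edit time."""
        times = sorted(datetime.fromisoformat(t) for t in edit_timestamps)
        return times[-1] - times[0]

    # Invented example timestamps:
    print(activeness(["2004-03-01T12:00:00", "2007-10-12T17:59:22"]))
    # -> 1320 days, 5:59:22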
>
> A choice has been made and, as always, there will be people who will find it
> an injustice. There were many discussions and a choice was made. It is not
> good to revisit things continuously, it is good to finish things so that
> there is no point to it any more.
>
> Thanks,
> GerardM
>
> On 10/12/07, Anon Sricharoenchai <anon.hui(a)gmail.com> wrote:
> >
> > According to the conflict resolution process, in which the account with
> > the most edits is selected as the primary account for a username, this
> > may sound reasonable when the username is owned by the same person
> > on all wikimedia sites.
> >
> > But the problem comes when the same username on those wikimedia
> > sites is owned by different people and the accounts are actively in use.
> > The active account that registered first (seniority rule) should
> > rather be considered the primary account, since I think the person who
> > registered first should own that username on the unified wikimedia sites.
> >
> > Imagine if the wikimedia sites had been unified ever since they were
> > first established long ago (so that their accounts had never been
> > separated): the person who registered first would own that username on
> > all of the wikimedia sites.
> > Anyone who came later would be unable to use that registered username,
> > and would have to choose an alternate username.
> > This logic should also apply to the current wikimedia sites, after they
> > have been unified.
I have not seen a comprehensive overview of MediaWiki localisation discussed on the lists I am posting this message to, so I thought I might give it a try. All statistics are based on MediaWiki 1.12 alpha, SVN version r29106.
==Introduction==
*Localisation or L10n - the process of adapting the software to be as familiar as possible to a specific locale (in scope)
*Internationalisation or i18n - the process of ensuring that an application is capable of adapting to local requirements (out of scope)
MediaWiki has a user interface (UI) definition for 319 languages. Of those languages at least 17 language codes are duplicates and/or serve a purpose for usability[1]. Reporting on them, however, is not relevant. So MediaWiki in its current state supports 302 languages. To be able to generate statistics on localisation, a MessagesXx.php file should be present in languages/messages. There currently are 262 such files, of which 16 are redirects from the duplicates/usability group[2]. So MediaWiki has an active in-product localisation for 236 languages. 66 languages have an interface, but simply fall back to English.
The MediaWiki core product recognises several collections of localisable content (three of which are defined in messageTypes.inc):
* 'normal' messages that can be localised (1726)
* optional messages that can be localised, which usually only happens for languages not using a Latin script (161)
* ignored messages that should not be localised (100)
* namespace names and namespace aliases (17)
* skin names (7)
* magic words (120)
* special page names (76)
* other (directionality, date formats, separators, book store lists, link trail, and others)
Localisation of MediaWiki revolves around all of the above. Reporting is done on the normal messages only.
MediaWiki is more than just the core product. On http://www.mediawiki.org/wiki/Category:All_extensions some 750 extensions have some kind of documentation. This analysis is scoped to the code currently present in svn.wikimedia.org/svnroot/mediawiki/trunk. The source code repository contains, give or take, 230 extensions. Of those 230 extensions, about 140 contain messages that can be visible in the UI in some use case (debugging excluded). Out of those 140, about 10 extensions have an exotic implementation of localisation or no localisation support at all (just English text in the code), and about 10 extensions appear to be outdated. I have seen about 5 different 'standard' implementations of i18n in extensions. Since MediaWiki 1.11 there is wfLoadExtensionMessages, but not that many extensions use it yet for message handling. If you can help add standard i18n support to extensions (an overview can be found at http://translatewiki.net/wiki/User:Siebrand/tobeadded) or help standardise L10n for extensions, please do not hesitate.
==MediaWiki localisation in practice==
Localisation of MediaWiki is currently done in the following ways I am aware of:
* in local wikis: Sysops on local wikis shape and translate messages to fit their needs. This is being done in wikis that are part of Wikimedia, Wikia, Wikitravel, corporate wikis, etc. This type of localisation has the fewest benefits for the core product and extensions because it happens completely outside the scope of svn committers. I have heard Wikia supports languages that are not supported in the svn version. I would like to get some help in identifying and contacting these communities to try and get their localisations into the core product. Together with SPQRobin, I am trying to get what has been localised in local Wikipedias into the core product and to recruit users that worked on that localisation to work on a more centralised way of localisation (see Betawiki).
* through bugzilla/svn: A user of MediaWiki submits patches for core messages and/or extensions. These users are mostly part of a wiki community that is part of Wikimedia. The patches are usually taken care of by committers (raymond, rotemliss, and sometimes others). Some users maintain a language directly in SVN. At the moment, 10-15 languages are maintained this way: Danish, German, Persian, Hebrew, Indonesian, Kazakh (3 scripts), Chinese (3 variants), and a few more, less frequently.
* through Betawiki: Betawiki was founded in mid 2005 by Niklas Laxström. In the years since, Betawiki has grown into a MediaWiki localisation community of over 200 users which, in the past few months, has contributed to the localisation of some 120 languages each month. Users that are only familiar with MediaWiki as a tool can localise almost every aspect of MediaWiki (except for the group 'other' mentioned earlier) in a wiki interface. The work of the translators is regularly committed to svn by nikerabbit and myself. Betawiki also offers a .po export that enables users to use more advanced translation tools to make their translations. This option was added recently and no translations in this format have been submitted yet. Betawiki also supports translation of 122 extensions, aiming to support everything that can be supported.
==MediaWiki localisation statistics==
MediaWiki localisation statistics have been around since June 2005 at http://www.mediawiki.org/wiki/Localisation_statistics[3]. Traditionally, reports have focused on the complete set of core messages. Recently a small study was done on message usage, which resulted in a set of almost 500 'most often used messages in MediaWiki', based on how often messages are used on the Wikimedia cluster (http://translatewiki.net/wiki/Most_often_used_messages_in_MediaWiki).
Until recently there were no statistics available on the localisation of extensions. Through groupStatistics.php in the Translate extension, these statistics can now be created. Reports now cover 'most often used MediaWiki messages', 'MediaWiki messages', and 'all extension messages supported by the Translate extension' (short: extension messages). Additionally, a meta extension group of 34 extensions used in the Wikimedia projects has been created (short: Wikimedia messages). A regularly updated table of these statistics can be found at http://translatewiki.net/wiki/Translating:Group_statistics.
Some (arbitrary) milestones have been set for the four collections of messages mentioned above. For the usability of MediaWiki in a particular language, the group 'core-mostused' is the most important: a language must reach that milestone for MediaWiki to have even minimal support for it. Reaching the milestones for the first two groups is something the Wikimedia language committee is considering as a requirement for new Wikimedia wikis:
* core-mostused (496 messages): 98%
* wikimedia extensions (354 messages): 90%
* core (1726 messages): 90%
* extensions (1785 messages): 65%
Currently the following numbers of languages have passed the above milestones (a rough sketch of this check follows the list):
* core-mostused: 47 (15.5% of supported languages)
* wikimedia extensions: 10 (3.3% of supported languages)
* core: 49 (16.2% of supported languages)
* extensions: 7 (2.3% of supported languages)
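The check behind these numbers is simple arithmetic; a rough sketch, with the
group sizes and thresholds copied from the milestone list above and invented
example counts:

    # Group sizes and milestone thresholds from the list above.
    MILESTONES = {
        "core-mostused": (496, 0.98),
        "wikimedia extensions": (354, 0.90),
        "core": (1726, 0.90),
        "extensions": (1785, 0.65),
    }

    def passed_milestones(translated_counts):
        """Return the groups whose translated/total ratio meets the threshold."""
        return [group for group, (total, threshold) in MILESTONES.items()
                if translated_counts.get(group, 0) / total >= threshold]

    # Invented example: 490 most-used and 1400 core messages translated.
    print(passed_milestones({"core-mostused": 490, "core": 1400}))
    # -> ['core-mostused']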
==Conclusion==
So... Are we doing well on localisation, or do we suck? My personal opinion is that we are somewhere in between. Observing that there are some 250 Wikipedias that all use the Wikimedia Commons media repository, and that only 47 languages have a minimal localisation, we could do better. With Single User Login around the corner (isn't it?), we must do better. On the other hand, new language projects within Wikimedia all have excellent localisation of the core product. These languages include Asturian, Bikol Central, Lower Sorbian, Extremaduran, and Galician. But where is Hindi, for example, with currently only 7% of core messages translated?
With the Wikimedia Foundation aiming to put MediaWiki to good use in developing countries, and products like NGO-in-a-box including MediaWiki, the potential of MediaWiki as a tool for creating and preserving knowledge in the languages of the world is huge. We have to tap into that potential and *you* (yes, I am glad you read this far and are now reading my appeal) can help. If you know people who are proficient in a language and like contributing to localisation, please point them in the right direction. If you know of organisations that can help localise MediaWiki, please approach them and ask them to help.
We now have all the tools to successfully localise MediaWiki into any of the 7000 or so languages classified in ISO 639-3. We only need one person per language to make it happen. Reaching the first two milestones (core-mostused and wikimedia extensions) takes about 16 hours of work. Using Betawiki or the .po export, little to no technical knowledge is required.
This was the pitch. How about we aim to at least double the numbers by the end of 2008 to:
* core-mostused: 120
* wikimedia extensions: 50
* core: 90
* extensions: 20
I would like to wish everyone involved in any aspect of MediaWiki a wonderful 2008.
Cheers!
Siebrand Mazeland
[1] als,crh,iu,kk,kk-cn,kk-kz,kk-tr,ku,sr,sr-jc,sr-jl,zh,zh-cn,zh-sg,zh-hk,zh-min-nan,zh-yue
[2] crh,iu,kk,kk-cn,kk-kz,kk-tr,ku,sr,sr-jc,sr-jl,zh,zh-cn,zh-sg,zh-hk,zh-min-nan,zh-yue
[3] older locations are http://www.mediawiki.org/wiki/Localisation_statistics/stats and
http://meta.wikimedia.org/wiki/Localization_statistics
2008/1/17, Philip Hunt <cabalamat(a)googlemail.com> wrote:
>
> I've been reading about your idea of an "Extrapedia" containing
> articles that Wikipedia doesn't think are notable enough.
>
> I've recently been thinking on similar lines, and have decided to
> create an inclusionist fork of Wikipedia. You are welcome to use my
> wiki (when it's up) as your Extrapedia, if you want.
>
> I notice you say "Now I wonder in general: why do there need to be
> multiple Wikias? Why can't all articles from all Wikias be one wiki?"
>
> Why not indeed? I'm planning a feature I call "micro-wikis" that
> allows anyone to create a sub-wiki of the main wiki.
How would a micro-wiki work? What would it provide? Why not just use
the category system instead? :-) You could modify MediaWiki to allow
searches, recent-changes-views, etc. to show content related to the
category of your choice only. If the user interface for this category filter
system were designed well, maybe it would be a better way to get
what you want?
Cheers,
--
Jason Spiro: corporate trainer, web developer, IT consultant.
I support Linux, UNIX, Windows, and more.
Contact me to discuss your needs and get a free estimate.
Email: info(a)jspiro.com / MSN: jasonspiro(a)hotmail.com
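As a rough, client-side approximation of the category-filter idea above, one
could intersect the API's recent changes list with a category's members instead
of modifying MediaWiki itself. In the sketch below the endpoint, category, and
limits are placeholders, and continuation for large categories is ignored.

    import json
    import urllib.parse
    import urllib.request

    API = "https://en.wikipedia.org/w/api.php"   # placeholder endpoint

    def api_get(**params):
        params["format"] = "json"
        url = API + "?" + urllib.parse.urlencode(params)
        req = urllib.request.Request(url, headers={"User-Agent": "category-rc-sketch/0.1"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    def recent_changes_in_category(category, limit=50):
        """Recent changes restricted to pages in one category (no continuation handling)."""
        members = api_get(action="query", list="categorymembers",
                          cmtitle="Category:" + category, cmlimit="500")
        titles = {m["title"] for m in members["query"]["categorymembers"]}
        changes = api_get(action="query", list="recentchanges",
                          rcprop="title|timestamp|user", rclimit=str(limit))
        return [rc for rc in changes["query"]["recentchanges"] if rc["title"] in titles]

    # for rc in recent_changes_in_category("Physics"):
    #     print(rc["timestamp"], rc["title"], rc["user"])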
Hi all.
I'm adding some tweaks to the WikiXRay parser for meta-history dumps. I now extract internal and external links, and so on, but I'd also like to extract the plain text (without HTML code and, possibly, with wiki markup filtered out as well).
Does anyone know of a good Python library to do that? I believe there should be something out there, as there are bots and crawlers automating data extraction from one wiki to another.
Thanks in advance for your comments.
Felipe.
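In case no ready-made library turns up, here is a dependency-free Python sketch
of the usual regex approach. It is deliberately naive (nested templates, tables,
and unusual markup are only roughly handled), so treat it as a starting point
rather than a real parser.

    import re

    _PATTERNS = [
        (re.compile(r"<!--.*?-->", re.S), ""),                            # HTML comments
        (re.compile(r"<ref[^>]*/>"), ""),                                 # self-closing refs
        (re.compile(r"<ref[^>]*>.*?</ref>", re.S), ""),                   # footnotes
        (re.compile(r"<[^>]+>"), ""),                                     # remaining HTML tags
        (re.compile(r"\{\{[^{}]*\}\}"), ""),                              # innermost templates
        (re.compile(r"\{\|.*?\|\}", re.S), ""),                           # tables
        (re.compile(r"\[\[(?:Image|File|Category):[^\]]*\]\]", re.I), ""),  # images/categories
        (re.compile(r"\[\[[^|\]]*\|([^\]]+)\]\]"), r"\1"),                # piped links -> label
        (re.compile(r"\[\[([^\]]+)\]\]"), r"\1"),                         # plain links -> title
        (re.compile(r"\[https?://\S+ ([^\]]+)\]"), r"\1"),                # external links -> label
        (re.compile(r"'{2,5}"), ""),                                      # bold/italic quotes
        (re.compile(r"^[*#:;]+\s*", re.M), ""),                           # list/indent markers
        (re.compile(r"^=+\s*|\s*=+\s*$", re.M), ""),                      # heading markup
    ]

    def strip_wikitext(text, passes=3):
        """Reduce wikitext to rough plain text with repeated regex passes."""
        for _ in range(passes):            # repeat so nested templates unwrap layer by layer
            for pattern, repl in _PATTERNS:
                text = pattern.sub(repl, text)
        return re.sub(r"\n{3,}", "\n\n", text).strip()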
Hello,
A more feasible proposal (than global blocking) which I've put forth
before is crosswiki blocking. A Special:BlockCrosswiki page on Meta
could be used by stewards to block a user on any project, preferably
updating the log on that project. The interface would work in
precisely the same way as the current crosswiki Special:Userrights,
with a steward blocking "Pathoschild's_proposal_sucks!@enwiki" from
Meta.
This doesn't have the problems of global blocking, and it would be
extremely useful in stopping wiki-jumping vandals. Without crosswiki
blocking, a steward needs to navigate to each project, register an
account or log in, navigate to Special:Userrights and set admin access
from Meta, navigate to Special:Blockip and block the vandal from the
local project, and switch back to Special:Userrights on Meta to remove
their admin access. By the time they're done, the vandal has hit six
more wikis. Obviously, the current way we do things is ridiculous and
not scalable in the least.
--
Yours cordially,
Jesse Martin (Pathoschild)
We have a pattern abuser showing up on English Wikipedia, creating
page after page full of 1-pixel versions of random images from
throughout the site. This appears to be a slow ramp-up to a larger
denial of service attack on the image servers for en.wp.
The pattern is easy to spot, once they do it, but "easy" in this case
is normal reaction time of admins / alert users, most of whom haven't
seen the pattern up close to know what's going on.
Is there anything that can or should be done ahead of time, at the
site operations level or developer level, to try and keep the presumed
end-case massive DOS attack on the systems from succeeding?
They're telegraphing their actions pretty obviously, practicing
for what I strongly suspect is coming. But I don't know that we can
find them and block them effectively enough with in-wiki tools...
--
-george william herbert
george.herbert(a)gmail.com
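On the tooling side, one crude first-pass heuristic would be to scan new or
changed page text for an unusual number of images forced to a tiny size, which
seems to match the pattern described above. The regex and threshold below are
guesses for illustration only, not anything the site actually runs.

    import re

    # Matches [[Image:...|1px...]] / [[File:...|3px...]] style inclusions (a guessed pattern).
    TINY_IMAGE_RE = re.compile(r"\[\[(?:Image|File):[^\]]*\|\s*[1-5]px\b[^\]]*\]\]", re.IGNORECASE)

    def looks_like_thumbnail_flood(wikitext, threshold=25):
        """Flag pages that embed more than `threshold` tiny forced-size images."""
        return len(TINY_IMAGE_RE.findall(wikitext)) > threshold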
I did some refactoring yesterday on the title prefix search suggestion
backend, and added case-insensitive support as an extension.
The prefix search suggestions are currently used in a couple of
less-visible places: the OpenSearch API interface, and the (disabled)
AJAX search option.
The OpenSearch API can be used by various third-party tools, including
the search bar in Firefox -- in fact Wikipedia will be included by
default as a search engine option in Firefox 3.0.
I'm also now using it to power the Wikipedia search backend for Apple's
Dictionary application in Mac OS X 10.5.
We currently have the built-in AJAX search disabled on Wikimedia sites
in part because the UI is a bit unusual, but it'd be great to have it more
nicely integrated as a drop-down in the various places where you might be
inputting page titles.
The new default backend code is in the PrefixIndex class, which is now
shared between the OpenSearch and AJAX search front-ends. This, like the
previous code, is case-sensitive, using the existing title indexes. I've
also got them now both handling the Special: namespace (which only AJAX
search did previously) and returning results from the start of a
namespace once you've typed as far as "User:" or "Image:" etc.
More excitingly, it's now easy to swap out this backend with an
extension by handling the PrefixSearchBackend hook.
I've made an implementation of this in the TitleKey extension, which
maintains a table with a case-folded index to allow case-insensitive
lookups. This lets you type in for instance "mother ther" and get
results for "Mother Theresa".
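For anyone curious what the case-folded lookup amounts to, here is a toy,
in-memory model of the idea as I read the description above; the real TitleKey
extension keeps the folded keys in a database table rather than a Python list.

    import bisect

    class CaseInsensitivePrefixIndex:
        """Toy model: sorted (case-folded key, title) pairs, binary-searched for prefixes."""

        def __init__(self, titles):
            self._index = sorted((t.casefold(), t) for t in titles)
            self._keys = [key for key, _ in self._index]

        def search(self, prefix, limit=10):
            key = prefix.casefold()
            start = bisect.bisect_left(self._keys, key)
            results = []
            for folded, title in self._index[start:]:
                if not folded.startswith(key) or len(results) >= limit:
                    break
                results.append(title)
            return results

    idx = CaseInsensitivePrefixIndex(["Mother Theresa", "Moth", "Motherboard", "Motorhead"])
    print(idx.search("mother ther"))   # -> ['Mother Theresa']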
In the future we'll probably want to power this backend at Wikimedia
sites from the Lucene search server, which I believe is getting prefix
support re-added in enhanced form.
We might also consider merging the case-insensitive key field directly
into the page table, but the separate table was quicker to deploy, and
will be easier to scrap if/when we change it. :)
-- brion vibber (brion @ wikimedia.org)